[tar archive — binary member contents omitted]

var/home/core/zuul-output/                     (directory)
var/home/core/zuul-output/logs/                (directory)
var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed data, ~120 MB uncompressed kubelet log)
JyI=t$LDt!"'UnPlv4 plTv`sAsVq,І l DX.#ȁ19tTOghD4 #$$A(pFBb%#1)DXҠ0IXk0tQZGi=E0/&8~V0w1_PpeKj/Rc6(SX8S)!6&.P`!.K S$\,C[IĪ &Ulx&kW#OEU6/-rX4PSB[&+؃tzpёH/o\Z<ȺAhZ($C K)FDoAs߁Y.<;hW3Oqw.j%' 3YLl^,Yy]!"}X{))5)9 Q 6xxtb%i_  9 0ml3'ou|`Y Q'6qQk$qH,ʵC4ko❢oD>M%4487-6ْcpP쒔vͱM,,嫿K^߁(~Ty3cOda0f9*%1˅FsvӤg cS ) c#vR'U'txDNx4J7(ĺWJVdbT|rToA ř}4;&^56鏗OJbIeHH%/{@FeE {_(.ePd.$7͑ ''ONΏ]*-n0UB3u\ym5H  h02%Zd-FEM8E kxB"HK r8u+ $6!`2 2jveZF^jFe`Mk|P1Q9R"]]G+| 3E6^Sq$I?ZS6;UYͅ7mcd  0v @GbDn9$0^ ܗ C%>Z;"E Rr*%̰4N+0E$u'1cR0L3ƾ>}4b&6_:"3"Řبq9gwBʈSj8IbG%tt !$5TbCASK#8^ q҅Tԋt^%rx}8JVX`{$ണPD|p*Ązb,% 5&i5^- ߽&șA2KX|9Ty٨RI5z{@Ei׽~л6Co&j b)Ftួ|[$간c.A ?)"wJ J1u,í|('o/@{J`!][o#+~Kz%@L6ٗbKVF~%-mʒIMl6XxtT@t˜Dwdx6;?vRj1vnrk*DO:]]om0g?^eoM~0(GIL\'.3=o_6iQl^._|Zf;cfՙ+m0J|l۵MxMV⣋O8V;џ!]-YْOt jd6h'Ue܈MKQާ]S;},>R܋Okjm_~q2Brt쯻_5"d(O(WeaQE۫>앢ks/\Vk)Zn|ߵ'Cvd>iMo6" 4?a#)Zq)Dmmdh/9HWFKHG/#JCIo۰yM+>ߏqҒFF|X2F?*~F<Ad a>$a+f[:E:GVlN0hLla{kgu {ÿs&)ya:т> kBB/ms)X#mԈ@l㥷Mb2S32!!g9^SC5GEor:PBKk1RNo< ,AD@NJx4$h }x l S8ctr gw7jygK } %+ Zl8ȭ>\ke4:4fG{<㞷Z'0{遮 S vIo 1CoDdG7fz#)SeGŰWw g5aloP/x+rboNM%8`mԒk'Z&Ü˄W 8l~Ur)aetGiRZ!a !FaB8BHkO2NBZ66B%ى|qn= N ~D؉L!CYiˣW֖(M `1dGOCG&7Wi*q#'ӄ`Ru{)3&'PɀZ3&h+,ź[Y@',uX];Yæh--vE,dYN-V7Sz#&aV.I6!i9NQe|~|MH{dCs9  AW#EL 6Ph"zetGᄌ DSV22<LT*MH{yp nv1fbytةܦ R7ɽc Q*=]V?GsT?GsT?GsT?GsT?GsT?G,_lY*A~jBV?GsT?GsT?GsT?GEAZJj*ah% 0VJZY*ah00NJZ C+ah% 0VJZ C+ah% 0VJZ C+ah% 0/ ~ 哊JZc+ah% 0VJL3UJZ C+ah% RRQc5 EdX3VJZ C+ah% Qz];M0kEJd^ mi8Ϊ0bK10!tE)9V:ylyu#{c}p16NNl_o}H@HK(#o] t8E %&nY7FޱnqA#Q.wI@`rR"C)!M>L*m(jN`8,Nd< ]6EVlM ow=2j2YLfl?日7;,p/`G7A0I@.-L%ʬbN0p5@CkxN!>Á1g嶧28GHsM7ddSq)5s*7UnRn |98qX쑰n|`΢%vQΪhmv\\JY e]%C,XU(X(7L<˛ SǔzXvRSZRKe,ږqYWǫ;&-` jgXbaqNTc.u훥 sNz2]k8Z<1@zXUeb?]ԘRi`sXVS*-ֹ%IE_/E_agh }HZ@y7(jZ=B'FryIˉBQ£rX!(S(J(!.1 T0L õL g30$avs},im͍SOX6:3,s8o B2Q<E,f) v@X=emt_d"`c :gGtɱKr<4/K RsJlNux̴PqySF9!ŤX x2H|_GnxzR, 3BTN"I^-!1@XN /҈/0XwzGzDG&9}9 [ۗeɹҏJu浟Ќo&cty5MlaF͗eɛrztEyQ\UsJRrH2&2G['(%^>*4s(䖓-'.UΣ 6hALDY') N%Jp$9RQ's(}k+!Cʙ#q8Sn,FdMX2ZC!$ v:mNO9k;.행cH@z%ٶ2#HԮL U(aQjh@ʙpVD*PL12>'LW 8D̒hwDpf,$$P 
1MRvI@F 9v&)a!"0G& @:W}UeQ9͉:.cx7@!*e1JS )H\p(CP}r;"ط 49Zx"/žsfhT\Siv4\#DF-2WmU?{Ƒl_.nFwW 0.rmo , F?%l+o)ZCQH6kOURQc 7 yvL z5Xn,ԗ!&gsZq&z@",J3(T?Zzel'NZi!"k+6w[;%=B$z,7qDf!1V+\Rԑꘔ q_(s B /;uhPp 1PKwQ 3&(] ^*YoFC֗I}rm.̞ۙmlΚ!"=]ح :zt3&,Z!CuSo]s?ެ͛RbPH$ݔ8^~%{7FNsZkv:#lਉ6X+༰Uf~T^R _bZ4dp&K^Fnm2찘{N_hyQ84_,={Ȓ*~*~*~*~ǚ#%N]QGSvN]ة ;ua.ԅo Uة ;ua.ԅSv²[Xv ;ua.zS" ;ua..S,Yة ;uQN]ة ;ua.\a.ԅSvN]ة ;uAQSvN]ة ;ua.ԅ/B5USvN]ة ;ua.ԅhna.zVة ;ua.ԅSvN]ة ;ua.ԅSm+ԅSvN]ة ;u 6-؄[Ɍ[+k'vix>9d'!~9::˹q[K6?h6G}ppkԗ'kQGsNjz@{iĀM7TjA !Kz\JINژ$J|!=#:j'\cbZiSҽunh+t}c>,{g"'uo}pz>|9B0+zBkQ 3AL\b^X[ƋQSI}F3gA$OYg ?#3$Da9D\"l Q DDa'&x&{s$̙~feLrqmn`P۫<.onПӑGA4#$B m$GC&$&P'U #Kӄ`Tc\XI.R"-՚1G6$O>;wk8h4M]gk/Vz)Zr]Q,g,ڿe.Ÿܞ&:~].=$kF8'EI 8&mB{5Ы L)ڋ:E{PV!TEv'bE{"\Qx5Ǹ ָ HFb@;g[wv߮9Ç `6ص ǐ⾐cSe+|rZQ9 .0)x𪦄ehY)Qo~)ۋ"X&t(ZjZׂ$%Z[ֿGtL~)]gџױ s/ "TAA\ y(-"%Ǖdq%ӆ) DE%_M\å)c$pU/˨Bdh]І`H}՝lr~@6{ܨ^{@pGs-%aQQbJqEF,w&$rʫ5C`k?~x5Aȵf>ؕULȴLq8CSϯ-E(UG'$:QNjO4ۻ!k:ӏYb3ݧT}X#b,Kw|K."m^|׮B''+ a#luw*M.zhPI㿯p8#?`TCucLլxv1Uٕ _M?Yf_wPaٙ eФl6ܬvIT5WyoNߠ-'wׂ lI[ju5nf]Xޣ҆، g0bGz1ѓU{p*#[] Վq0V'4#.t$7,Q|2\$VTej4;g*lTL6l.W~py'\>{wOw?Rw? LMᨳ z.}Gj7GӪEMKK;Z}XOkrm_|߄6׋|O>pZ횳3pO6Ek)?_6{.!>䃘rTB|Is&lnBǍTp#)plAKj%xŕ6PS Bk悓 #NCl+V6,ּq\<^hK&DžAy a'axB 3a+:/0^o2${΋O 1t *KRH{0 ~/W^4> (@ZZG*R-< Vԉ!9D{1$[B[v{t CtLz=<;̧J"b$ <a !u:%*E!TIn$X;*wA(r}X]m>_b)t5 *bO)Yl)[gBl!QUݼ']ͭϞoy3t`< w?iz97xaP 6ܹ'ݸg~2(|qg6ӥH!ęt }_@=@A0 NO(9)[NdE=_SH-w]du΅[-E0aeAz! 0ݳǓ? Y(DHODz^*!w\Au{" ֶu\}+- dvĐ 4N/>^-pH9w5Zݪ66cl aakªQ ?#@+D XQ .SmI + )wEd EʍUPU?ANV 9>.,|H۹nqϨ=Ufު/2+ytB]H..5p]̩rdh:e+mSJV$(<9$x`kʉ6; jFAa9!U(p,ie(f\r Nk2c0;wh<IrǛ' 79y`6uG69 G/ruUn8O,QBT`2B* -#! dP0BI ^WF:#99f sffq, ?{Vpl^L/02݃y ,g,94}G_,$u) ޤs=c<׳j\_/|UV} ?ܥ K ߧ8lG cez23JbRs)BJA9ʩ3.X> &CRI 5 MtY8Θ2Cck/;\LPv58n ʊ9 VQ6L@L H~HE= )MsT&_TX؝-gDJ]ֆh@ZH:ind6 0Xk`l`!!!mH{(*nSy? 
{]J+VS&VP=muCP=muCP=muCP-7MImnJMI)6%٦$۔dӪϴr4jZMSij5MqcmSiu!MAMӴi6MӦi4Mi4mMB5mMӴi6MMi4mMӴi=o,*DM'aHgkEv;!3+ApÓ>;+0ۚΞSGvg/}<#*V#.FIB10"6X{JZw6( z@?s^1ŗL=X*=Îbf* 7|44W8$&ֈzK̢ˇTALf Yv*fR*%{AdBX0"fч7s#4CnGl }sn8;`6Ep:'NO㳒.=/JTBfyUy4B Mq[(U !^jDNgZ$w2ΪugC 7ԋuJws"Ę yWx,3 #P&"0n9SȁkIhB@M|}+p=?f.9Z\`*AIaAs@!FQ-Y˓{3Kbߤ$_=)٬Qk8H )ς:G1M/SR&Y"L"a^NgF/3(b"FLF&襘2IB̂,(*#hƬ ތ2_pjn+sWyטXBB]i㙷pϘ*BU:B^ @-ÐZO4c[{i7Thס%e2EƁ3akbcN'!GccG{bG<ѭ/W'T G)+_;4eL)99jh((ARhCD{4[bvְ(<}6ĩl d1e,X+wIt Zy3#Q{-l];#e[ EgMvˍ6Snow9T|CN2 ,h4JR$5F ɧ D/Y:B*]7,>M4 ܼ呙 vbmo8^ǜGe|Fd9g+Ϩ\ꡑ31s4bh*Uom M1q789)ei{{/7-f\NhbM?K^7̿AޟldEDl3, 1,q\8-4ZJrnUًblk?Ѿ.m:x_uJVj_^lXj/ڨm%&g5e88ԩ*d库YԨ Kr?uYr&V=çάd\=hXϩb5YGz >i0لRahm2xǓ .BL霉e-8ƵY# A:Dn.CgiF~ˇ4*:iift_ (&Si{Gォ6ϿN>O.M8V ܷ^>X}n51iŰN)nݭ9ʤo0Ĺnve\CZz1vItEGsFA Uczom)Mb 5i5VcgoVǩn[kCiT{TZp)U*_H u!OԘNHy؁μS./ F~Py spp~V_+ft2B䍄^PD V*˙>{qQIT )G|,+j:2p!SR(K o_OqR-t'bbrF4N1=RYb^qŤ˓I SLgFIKzC![$'<7FZR^3G8V~g_^\цk_-{\=}F&h2K0,{WLl:>_@ej$ZU'@.H'UPȤ[ RN!;Y#Y}BF)_TGoJ@ b֢UB(%kAH <(UvJ)ϭL@ug!/\ɔsveI o|{o^^8Ƣ3/_9)kn4]M`C$=Ukhۿ3k{0enY1*c:`0{f aҺMJ'|U@ܙ)%wk[v<-S5=m+ 5h s$`C'4HZ 3˶(O C !oJI(g(hѱvl`"8 ; }H)R^\^*=s?rG/pۇ^Q]=r%JY"br5t-gpbҜ9FBI0/lbcIG0GaB)Q;lf5{`JQ1JfculPi t%\K ge*x\f5NxLn4{g5T;d; iջǃS**ލEǥwO"}h1Uܜ v,'F?|q> $|I$הDLLL}aG'FK%4Qnqm,y X >12ȌMwWkVݍiS?<9T,(Mf돫)<з~s>{'m^x^޽_ Lb#.jrMY‘&.(NRHiTlѭϴ{4L1үT?W~/.^pNUgg<>;9tc{ch4r57~ [ 8M `Kn|1Ԍ`r+6Eh((~1_ͳ `UַC|V2:iݪiYHK_/.}2{Hʏ_~?9bW~ʂuԜD1yonL}zW߰=< cY0?2?hNq%J |uOY1碠| IYRwqW i՘~M$0zZ(,9s3&)mq+ [UqNUjaWfWJ#]jj+$my!|~ykOc`ҏįDnjU:k ^B\zRN(|ʁa;gxQO%gH=@Ӌh v-ƈXl'X<3]fze`:dh{C}_|>7޸|\{:cq_ roe7+PW/^gcGYNy{}2-;~`#L *9R&@,puq6L]ٹ̱3b/1gD@hܡV݉Ylmw8+Q`2s+lC 5s6bgCw0q{n ?\ 4[@^z}{G1v[] XmߌO?Xb+/Ӵ/op9P.ö/z1./ƽ- :i?Cy xtc!Ѭ~ku@|rog/bdўA 8˄2%ͻt&{_u[W\Mӑp“-ApK44-WWw::h͕aj-ZaU8nL/$@{9huaڛoxjE}[2^;M^Y7pˑ U[' RQE\"Xy{\薉8|0y1}7O5Jy`"a;z5Gs>cKʄJZЛtAde}C :E;Y.$L\UZ!QJӽ. 
nli#?7o io-UAdlbtAlR5.u2Nfr֢kQuzjʗzԬTTPu`:Ҡym&5!:fKwPE0cRN›1-.ot.4B%$ѵGpGKK;_^[hcVBߣiNL&UB%%s*34h5'FTY:K>fg+Z'A裣Ǿfa.Fl<0iNQ*zE!՞vW:鑬cȐ%cNօM,ij_O͹x9i*ҋqZJ9PJJ-Atz9)!K&}d s9;9h4vEyt_V 5Ȥus+ u< o 7mƂY,TGWZ"hFymV Dw*<}Xwvr}~qG'&!z*`YeDM&PRGt)}NAjx1+ 7DD.c`rYwk)1 hcJY%h v% Aر-Px+P(v5BbFP,F,pZ{6pPJ"4)Y c~XTי5Id(&J+12~R*qwq`DYUZUcfl~%dl%VmZb~ M[4C:kϢ;a$2+›"A[vg^HL)=X2'Y*XoCnVM5墑r7tid" J.YxY 4iU.o#!:M.P,=wQ@Ւ-0 Uyۈ'3!wrt o 0@5[h#}݋WPS*-kvie H'2߁+'KeYjČj^٤H}Đ>j ;"C-@^)`vF D+J ֙"+c%RyJ V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%L ,%Uzg@V'WJH d+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@h.)`p1T'ʲsT9p=V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+>c%]R9 kvG s%u+`e@u<@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X (>ZSXca1m5V vٸ~QV!% ~wK+hwa{x%Z,\K fWJy\rmy!֮E+d$c Q.Lt_-gw_>[ǷUж]JTb!Z~fm?vihh}qk^bdE_/pHc(|o.+?lu׿./-G_dr?=aV;5P/[y<ÖJfy~zr.%A?yJj!7ҺwDž(lqe :iaF~g{3Ai٤".A +) aGGyQ rʒ3FѢ.fYmv?"EL! ; w"Z@!2Y)=C"UP;` JWdݙtEֆ'a|瘮 Xo59 zϿOWQq?<><{ 1$h25ۖlCOfo}0YNXD+(JUS%BMdݵα1] @K-?WM'& Gp210fLgӟ_2sp|__^vWg{_ˏG5RmW'mwذ]_OOg*/9] xُq8yvn'ԭ.UNaitID="g3bUt[ M10' v} "X/#i^rq|qtGd ;:7}<\OiMθqovǕb]==\sBhQmڐz$ByGkH!bYEZvp?~1WZ!qY5&CGpiNֽ}w,qKgc K"sgۛ8>r^X/w鯣oVGۿW=5]x_ngjGg'ޕqdBiC@:qlg0n NcTؔly}_u7/MRdQ6]]U]ޫ߫w5xlZj[/ny3sX28Eĸ`.w˻37Z-\;k8x߅/7٨Dy`t4Z knw"mZ'Kdʐr"MVΑ+ #gf^^GI,WSڋz~( u8x6Tjq7ki2*h=u܊&Bh3׬dcNokb7.n]+,$ 4J=pdP!Nȋw`R7yJ!BTè ѩ;؁Tȝ_ȱbNÔ 5)08DUQK딢`y9+ff >rNopyvzяǴ/^%d0 0vFg-0vTu txHxUc r 3rꑤG>'4[N fB'""zYA6"S"b }ғ`(|)]iױ7b4jDlQ}OzÊOZC\ߌ߷ p2z?:JoUJ%H6ܱDS"t@H+UY!\D8P'S>bLbP2;sDRF-\RILCSa5=ִ #$@* YID8hjcsҥ: ǚ >8IHya|)o߈rx}8JVXtJ =BpQM(pCR3@>΋u6!uM6Ϋ kx~NqөFWƸoIxrHLM^qpU-]8$ Gßo'U7@~w 4n#] Cچ!H,i}*+V1~|up1:*AG>dۨ- DGZ&kH}ﯣq" Q%qpoT[gQ5 ԝ\9o~xc|w|o.߽v LMu r)cۛ0547*М|fW|qgsi$g[n@|_||ft}3:fmr {r* ]6%CrMլpV/ОQ}2Ĕ39Ҏ*{W̧Lʠx4mz~&VHmV.r%ke2y䌤+'kBr0)70)I}0yE>_N|@sNhԉx]~Ryl,TJb@aƍ'lQg;.fOo~ hbm}@Ykkx;6}L;'iIzO<-W((7/qoǦm2'vC+\NSCƵeUzN+a⦳3i:#y'X3A0ɵ RY'3f4x:FY‚S =f20p{*,xrہ dz°7vi{9Edpo{pi؂iW,R@Oz-w]i^ { Y{1>ϒ[s4bz9fBI=w q;! Ho{H/i໣? 
C!fH ^ H1[,˝UCu{ D'2Ziprja]UR RUJhvHeQ'5տr E) &".T[RvYԬ5. (kiD1v#NQnH+)ۡdXS6(r 8M(r*Ra[w9lVJHEFʚm: % J&=NYNs?| Ժ܎+A(<2 #5}$G j:Rj f(4lIAZaLvUe iv׋f| ;̃i<5fT _1\.G2C o&@JEuݡ;݁201PvX0B=B ?k8?dД-ӊ(cV x$ )Fzg՘IDk1hRjxf;dΖQl~ d|_y8Um_owHf<*omu׷p/'*Q136f޾ͤ.n?\֓\;2RՊ]^? _.&T@ZWϟZur lطWZвhun-7w(+[r==nMw?64h{p9oqSl^^ O&4?mm*Bbsn3>GQZ:T54(8G +j1޿$) 3u *%7iQh O߭p}c@{{?Y~ G@6`99tT+f&>xVy#0Muz%s P ʹ>xQel`4?/fCzKEWo9a/Zb0}͠ҍk P᧏d@?LI=V@0B8o1:Y+69$sOWģPkm4u?u찶o<"O59:XZViv('ݕ`^>.F)0`M41Fâ K-'Q;E"4h P! XSD|0arh~{+%R.m:Y4|W*l)8e>x Y/ɎY/!Tcg|EY[xxA+"P  L&y@)JSa(LpaUY%P2zE) a#DOpQ[M*Ffd j0]nɘm:%zrY2v/J.yLnӉ)t(K7_7(`04藓W.=a\V`DLTSbCxvSd9("A;6D`g9\MXO0_Sqo<{V/;l͒yxH_T3(=؝AW|VmKәE!|w776A $(6TЦWx{z{Y/lY:õ^߃- ?E*(ѫ;hZl0}Qӫ:3Z^7ԫU۱IOԋ zAӅg=v.@K{F(.nIyI%ƝQɺ3c YxeS,$p URJT-'kNW9; vI(+S4`96 nZp֢hZT,I+ƙDmkEJa a:*x.gIh$:EN9Y'W1o*aԜކ1׌ڕ rqPGXה"*%Eˏ@Oo %1+!J.FY<A8 ) "A*jh؟8inD"p;锍Q' ZiY4WԀxI'#wUqQ;kP'kT\o ,DWBF%O~c'/XCJ '$!Z9W܊vq܆zg=cp'l LӌrE1p҉*#=NwG9X8c;]C+ym6ގV`k8UȈ;Qtf۱W1\/99KnPs㶓g@Pf f?9xT #++"@jպ Ak`'AyEպgc8!c9nJ IaXDk 76SԵa5  4q0|1A$1xI`*Dfv]blPBAj_?k>p1 xMyF4/}b<|=M/WvW? R#ȖO<\A{raJR:9prד}j 2/*S fUR^bD1OA퍺J/*Sk`R Vu }`boU&W}QWZv]]e*5+Rl඼Dž4zڳr/׾_Zf#8"rBg:8LgMI>;l@B}6D W~1bJ$X27j:LsOWәJ%~jЧ\#.QI7Kvo/'j1HdWc=Oз op:[0?5T>z^PU{9Y԰_@\U'$(H9w X#cCjry@g oTO#>_'ji};8Oy;5K9\yWY21Ɍ\ضW#Vsx)Rdφsg}e'f>_7POO`\@:y}&?$osdRKCw?0,dmK,}Ϸ9N%-wc jOXsfYH*iVP>hXk&E.|x~xm!`2I8M4BhP'K[Ɋ˰[Α]'n8t^| 6! 
8zRG26-X )9.N4w7l\MdT[<}[EmAa6V;E!2ay)Jy#g9x_-pc^x k2QoK 0 QRbrqEF,w(&$@2q9tI%^?h,w-1 )R'J3,$m@x %"":J.(;ACc(d87X&[/H +S01j}JYZ*J%^s+X.b4WRiA(&5PT4q=qOyJW4/Nr F RYAz2 \I MJiC p0e tH?s˜L:,/Q9?U̫og?{Cx#!NzV{rI墽c]e\0wf'vI~~=+//9 : f?t.0[> ny"j ޏPN98VsmxFA6%7ZOHt6#P(&WJCK?ei&Ri1Nla`qvtvJk=wl߼9]p0CDw~V&4ʑzh'N|+7*46`P Axo ȑ,RvD2&0U< %C!@@) iqFe10]O}"y^URh/K$Džh,:b/*B2I~K)AN:{O٘x!O ݝK઒%3lMѰ'gjoNQVZ8腩NyhU*TN8S%Hs*!K{ɒԖuzѹ:)i:)l~#0:"F 5khD8S RpOѧoڠʠ\=3fIWwgwDP.r+};[7B!Qmz^}n>2~`k?TݝG= ^?DR]~{3e?}43_w>s*++&6Z_ގa֠^8s?I5>ʉoN xW.MPzEVe ̊kEAz/unσ&\Pzn^b·amtA lVZ n $ !U /$5CR_]D2_**fyWEg97x+}LT^G흼HyWm^9Y!$FՈ1fhJ{ͅSmyCj Xi Aznl $ :)cP %<#:j>10%kΚ:ʊ+tBGrQ힛,~m3bC&pX7>(sGPgͥp(ҳln%"= g%#J- ϰ'̗;נJA.rX e欮Jeӕ$< XZ.ZQB8 " 4L'!l QIG%Z"ڌsOLLtj-oMry RRFqo[HC=PޅMf;8ꫜBty3ݩhZĽQ,%~3NjڳԷ.36.~q];vڍ\ڴQ$ե@h=:fOFvwϟﮚ9(tv s7ά_MF.{%޼r\>rQ{yӫ#@ϼΘz$ٷ'(x!se^B=ǟ޺.Ax=Ϫ겄֞#CYꄰ9˭滱!T¶Qbm¶Q6 FAL*fz~Yz9^z9^ɖ%cTcn6e cn-{yzjxW¶Q<^mm¶Q6 Fa(l{j^+NN'Z8k0-2PUTBP'KZ ؞1ԾB: :[vz1sŬ{2HKGTBJKJ #5Wn6s.}H#Z;m<8r. pwA!:F۝5glfV%­r m1(=\Œ@^ײPwV:a4YEj͵QRb,✋XprT$J/W|ŜCHύuB;hf250 )R'J3^%)I@{S+x"ru^4Pg;[Ӯ\ fדu᯼OD3OĨY)erP!.sT*HpxURέ`F(h |ٚkkZg"EM84phqOyJW^@0@de"u;ʖo>h*4O6Gq%!Qi\gmY EcBP Axo ȑ5"x+nZ|FR0mʘ^V(\*  N0CŝJYHznotͫckϐ 䩜 -e"3" EWEEs>Y& oI0J[ 7Gq|}me~1h ki{Gi&O]bryҝΩK઒T[Sxa4ǩKi>TZ8腩NyYU*TN8S%Hs*꒍^zAF J#`n54r"D)Q P)AKP4^x?^KކZ{w%<7cx-׫"+3yI5vv4@oB겟:@z&(As"P%Z3+a` Azay>yQ{n+wjT_VZ Վ NobG?6)_RX|o#|ٗzȯ?ңquKWË^Vɱ~fxNM0ds7uZ>][3 s|EʹH9@Xsr.p s bn`¸[+;3.Vj^_! 
S b)"Ey(<_|Q/Ey(bDn]xyG|x ӕOT@ Ѭyl l'ە{$!;l_Ϫ2 0O2DYrW!yM!@`D&;5=I&<9 *ÕA@I 1jMuB:?hac`k؞~͞8WN`%8 Q!hRJ5H€vY8Sj8uw b*;$ALٟ!%8,`8W$Us󪰬K2 0&,%$aImw7&D++F29Gs*6 e{^\{Y@[3A#w >&U[;1]:/c>,B :p%p2bZ ,qUys&8A˻lrxfʧTp'>GJE* T/I Qb%!˔E,uN]\в(% TB蒀IψO9?LIExb:kj8uwz*5KB#-G^u]ybM@AlC~*dq(h%iT4+"U6XIyPL^~RsjS ͥ eEOvwwkyOxn%ef69u_'n`Z/plJBIÒ''kmF082شGe }.-Id[-Yh˔%cYE"UL׌zE9mEDK7QB"iͯBS(}/'Sh_+Fv#E ̋ɎALtsߢ}8)Vu%,Q(D#JDB!;exu)QՌRBr8ezϥ%UHƱ`c)U1H*TE}iXdQAta1UN.|p nr\W4͝񷱟'ڠ=r2U1< aQIKʙIE09 ʁi;S pg#9YV'!(fJmc"^҇ٮpcŸXv`7+G%Ax®#) W&Y2$s7<Q[8@2z:N 2^1x"(\cXģSa1rکQpb<X?vԈՈMF4Ni"[)$BXbR0 5y4uQO;[(y 2"opjT,e=]52Jxޣ'@q(\̫F,Fvxwq,1u]"+i;e# "Z͎uA2BW ƼPA>bq, :c)* 9}"s: ƱCGBGOL? \)OGVcT2Ux8 @ޜ6|՗\L8J\rJRFΗ|rNB4z"ZQ4IZhE^uH ]@F5SKKr%SP]L l"39* Y@/}9}9ԉ|5_YKKheP{9s弔bޣƟ_O;F\ 6DMWZBĤcy"  cQf3cLYO턷6+iFSqJ_l%mY ^j?})v1ڎAbܾPd.)MWW0QSJHSw?#RNX3B^ghARvdqOZ߯ˬk9+_b0hrz\o C^W7ɇK*€9=0 bTmAL&ʌdXGhK*^'Fd>kbNSmN{!pAbf~lz*qRk]M'7G)7{yJ N@s^e^/caEenuohaEcSkr6xN._B.kI  gI:?`uBB5,z0;ٔn#I&4lv Ćy@d!k[Ben=?C)N:-wLZni t pd~6J MCu3:~dҘ# 5zxhsl06nF_[m3{&1pfV{ yg&ަ2omx7x2J :QtkWv66;lxCҺ W݉I7 ]{X45ӜDd9=/A8lb1z^3 +R^$=sOͱ*^o,l:z]u <s8 ?Pàx=XRSk^iоi5H;pMxAȃ99sA ]>-o@)Op4ШJyZH} Iq  |)˺y0:R( 3r iP Fb:rB cE0qV*D@K/FNwC$%RV}NHL赯ЦlOWqj#.C<"ub6G\m/Ww⼣6SF'wC!Yc.RxeS,$@GIbRr!T% :EAT gslp!g-J_ʈ艦E̒dbI&˺p_B{FQi)]^XΊRjlvV1̅f$DԮLА+PPM=*UPdu"PRou =}`' "V @C-)"rtDy pAR E;U jI؟t~aqED8d/锍Q' ZiY4p?(,q]8(:+;1UC]`' Q) AAp ʹN mXh\裩N_Ev'lvrަ!8rEq҉ ^'|ĻԉG9񘃱j;]ppf&8Vo˰an2S\5!DZ>PRv2 HMJ@`Ԯ/$ .4r:lP+?PQiqـ'ـK :Bx$jw?y8L@`Z 5ptqPbFLͮS_IA7< Օ %Q5Z>F&M#JcP/ףln 1RGaXIV2 +Cv7b]MtѺ$ãPpo=c@˛|jeV=9z 8ygZKԭ1g?twa$3Ng.g(E}:R]hd?#E_[Oߟ>L `<5.>YV8v.zVg?& (֏?6gy kn{]ڄ&W׿F8L*fP;v:YQѓ=!(KNOer?(UءXboQm_ +؟d)"a?65˧YQŠs"hai) <ɗy`NE]m{ {oH.'yPP#uuߚG,j_ ~oo57-o㷵VioѶeC豯s$(H9GgDR/~<P՚g,`寢T!ય0bmqµ O|*)B4f~LPz琿#@BfVDj3VR.D}=g]4B1ˋ[Sޝo2sH5lsϲs~YȧOy)I<ᰗYr5'x]EN o(9kz3k:SB+gW}KOS(HւO$S؋dՆw'Zc_=b^PA\iӜ|GTU0R0J^Z^0oeJjv i ,qX[xؘ#RF)'G8=nɍ;V#T| [ ݬyYa Ɲ5{Kpӗmμwg^ IO"אޜ D䞎G5GJyA R%q$Xـ r**S+xRwoQ]IP'`)ɨLaZ[WWJ]|JHw- "q8}?_/\WmW|a_ N̸}|Ih T(b%}pB./SZ̳?N,~bvA@$&\@8s?ڪ 
%QKxn6s3RMREҖPsڊp6C?~FiUqO֢ƪf-ˌ]"dYmwlFUvP.z[Ho>5ߙn_[nY:q_Uܵb\ :kF 6DB-ME _ygLD)-۩B(Ja9jNz.Hye$Uʒ!crU!2 ME8G, @tPv?7:;؞ %DnwSz|4iS6)5R[,*.Gu` I-b1J&ijb'Q )K$)䢰p[t (#g{zLucdfx V.ۼT9h<Vmu?o|zMoI\oOP{cftX]o&ukmHx?ź0Cuc3u!S+4Tغsjzs8Nmc8uز–Uí4zoɶٻ֭8v*,:?,NM C"۾ǯ|_FXK_.ΧM1>zx~ǎQyzt36:6g</У$=%.JqB2o(cFN$uq.gt)[2f>`L %@ٗggrڜdӵN?5Ԙ2;6&n'ަdB"4b&5>g f0@w =JQ|fЏ c lKh*ĩy(6:8Y0D*X~886-K7EH(1 eIIU;$ $R0q[o3}8;nE ;0ML%=fNw P6PH ZCVjEDvFFɰS8&屃ʤcV `aԧtèJ"| R稾+I̶IVD<[t~|!@^&HMX 1]pm™OV>BO >OATm`8ܒ,H/Ă)N%44༷ua=Ŧԥ [@ -ٜRDQ@ ѸC9L[wW+J5@{2XVz??sRJRְ*WXT|msDJ02ӌ¼7e ^}aXi l*+ .XT+"fK6TC:Q]Hcw#:zW#l*wP"dM};u{`VB 4z7b8-6]-s.V{vjj/ vWϬ`O bm-5{s|n[]epÓedJ^B -ͮ TH RQB!i^ S]c)0qõS.Ov88L?ik7[,bwh|ЃU/S0[jucs.ԭ3\s2Oy "o\=:[IDhP/>jĬ4`t%Z>-oE&asGMjWۖ3suӒElbOUet/ՊŊYGt*xTW@c,![o3-v9\0혋=g0amw՟j_ܒJُ_|"͓ڈ)= 4iT& 1% )5 j@K׀)l|{3TbHJ,GX[B.C @"@-8r8ijM-yeqs2gTF)ض}a7oͩn;֫S^M]-_+\vJȇhoܝr˟GQmꌫ} Jjq9(ST-u 0%]S(NiO0@%y-h ƅL Q9UI6bsEiWmF) ,1:k{mFҌV] ;LHme߁}>ߧ.lc;8(7_^Ӭ8@kKnM6JΣsQ*m. xwމC6kaGCYK98wX/~7(J -P7ofhA72@E7]JphAJ=|x8֧_|Zcީc[OYYZZ/tj4[#oWvD4__gŧO}Xc Ojf? P[tyY:'V-@]_'˯pם^'GD .1 ks6ҜĬNhrPKr&1M*睑?mۏyۅ 0_<;puZ@ ;rcbw*}5bK!hIMO鰀z` v)iu׫ /!n?yI$]W}U2};zV<XtkB`' z,%{ItAP Z-;AyO&AplŹs^)D") je `rѷT Mz\:R Ӎ>d,mu{B/=z zʠ d?<^!&2]<5B])a Dd7ϙt3O:wfvc!͌BAc!Jq:Q43gckb-P&-@hC޳}蛸$'K5J3M2Ĉh+K(^ILs5SMՌD:%De@G7&ԆΨ-Nt}9m*`r']Sw[[swuھK5~לu=(^/>Q_oNtL{88VӲV>19fC:$Z+ wrq/8;ę#$:ǖ*|,ՌSQ);u}E}ePӊ 躝;v@CȐjTĻ 'Zc 3%8&Α8[;6JGQ{ `r9g Ty9EMAce5dj_[^cӯ`ro9]G]1TLd'7  [Z0WgsRI潉WM3zqj ZRkjSL.d3X}1(TGc8fy7L'yqħS R[7JY)З8Cu( TZ 0"тqez>̮"s3,Y:uub1%b1^/|'1FtF%c&M L(?eM$z6!aI6yd (竓(;ӛ4Q;#9T͑JŹLKज़g1'Q3ҔXMD@L %89O1YSi%g#슴\Pj뼷I.*yt97z [9 8!F'K0U =ݱsZ8Sz+O+>{;oxăB1_bb~{߽/;&8>tNeʷϛVTg?fݥ[ɁkjD8/|rBj$m3)Y,\Dٱs:*Pֆ-aQ)=Trj%V#SspJc3zod8 z8j n{Z=bQ+ş|5br9c`=] <.v!JJ,\HqVEo)$X}ږ#qJ5~x(U. 
pS5IV):kK$X  =*&9o.ψrG_}%Ͻ[ow7uVS4Fvػ~Ȗ,|j@ |Gx33ٽ*-.[f>ę9n{C(OT\l2Ȕ|܊(eiN5Lg@o^as忧z._/No~|&]>HH ]hBD,[#Jj!3z&j$W @|5}joèWEG)3RI;k >GuGHbM]3J#o|F1Hŧ7]|4ULdj sFlT U0ő3KғM 攒o麾0ٷ2{lfW<aB5Ƙ "7Xy NpZ*ś)/ \ǻEKg׫㳺C\W5NBz7=o<{ "ߚ\Raݱ]]-گݼ0&?r52Q<O|H~ $R.bputnw{(%T M|wym3@YJQ@1xM`-1 q[M>U Y_q9?(}7?$NmZv^޽Ȼ$057t/muwhG{^S9<}+UM8Lw_//?x}/^0GxZ&*5FՉO޳-wB?!X{:ZՍ\Ս6O/\ӧo|t3x(񤰷Rh(P>B_ѵi5Z9ܢk-r׵|~o>̧fi9ѯ~濌gXУuj֜oSݹ9y_ȁB(]6KRqMQ]FlGV~0}ք{WxYDa'6AR. %e}/`i2g/JVi $穭C {Kܠr3,]8V䠟H"kIQEee@T4 [#̲qmyܠtp>dkziy-[-u-^ ;%՜G<˫s:"D}w?g :X:LP EH+pxԭi V琬ImfR13!̣i>M*A#H)ӿUF ^ 2M"4zF+ϑ]o2 ! g,nit'Fu;}=SW2do4mwoO^syzuTZYXq[TPɯa|v#ZX: (rNGY1徾ޫi+Ĭ׻_^58W>?< ?@zR x:`CpL: [S{${Nbge:9LumOZA*t8OIyfHvy@gݷe*r tJqm:r$xJN8MT^YPր];[v&WOz!eyqZ l K\eia'R/C s?)-Z}Z=<[a8?_ ?Fo_َ]=ׁR֩la}1z E POґe 6)GB/1hXJ#zض-Q 2H.:ቸ'bN!F/I9{[*YWRc4SoeJ7ZӳӋ~`šV ˚ϫj:z?oVo^ߢ唾bvFG]vِ^Noosec_kEU>; 5n[tm'd^<t=rqh ]jͮZys-_s>Ԓ[\]tG->dNDFk_2S\|\86WH bRZHCR>0J)ZG[`1:IH" KEhƂ*R@3q6é^ $|X3O\_%<>|]mn<[{s$8@>e,o2,n¯Fd,CKHSgR0~~mb>//Dr.2(TRDSh uMU:&j3q6*(a)[@LOH} ޝ(ƫ qzU YҎx ZM\QV2H =m#3gK%F؆-JHv=jF2ʮ F!4?"%Vi iC}|7εWdH[ř]dz7ta29:޸fX($TF(a[D_Y ABtr(fc-C BbJV)>±EgLIT Yz78=vth“˸+\q<a6 I~)%yWl*zQ@:q(91:Y\0`eBd,vjNk7gg|.[`C6B5' S7.b 4HQ6hcAw<.t]cXs8fqhCYWa1>X"Y- K#x6W|hmA'WH1;@QT$JlE Sq&{$Ct5X+q6CuwٴBkHUsV*̸|oYyVJq@G/tt-ngZ{ȱ_)`wlO L;ht1fO[kYHrg1}/Jd[.ے]A*bKǍ': G1 &QP:`0?E#BOzr P;KV@# sV >H`;BN]i ; 40+s1CMP q+[fA1x`rj}%.:vr89T^)}y;QE\G9͔~Oq5$ F5y"HuSSc{DyB i*`-;=-F2jn\~j?zk 6Uv!Zou6rp3wZvsA~v7߭l6~\Dxl:xo,DQ(MPF9Ƞv\:LDպng<zZGA 53x"4 #\Zwl6r6 p>|nTfV4|dbcjZ>o) wSoZO]W`h/t4'M-=w%Kt$ rٹ_\9|`S`k ) %u+JIZ)o$.'vuim>hsW<۝W s45(0y/YJP%;lb1z.fkP":<}:.*dgsc$^wv v05C)v}s=S<t5׈=qߎIbM.\&ICJ$%ۗLGvZ.vH$2xNF>xzBYUМ #8\MdT,ݺ<WQ#e +앃sg@D"F"24C:nW1 c1ϷhW[k|ZV\{Zk`ƥ QRb,✋Xp&$DeR|i^Lr]c3k)o" )RJ3UcӔ[(UEQD Մ‰{G.CϟFNޮ'ĨY)eTJ%^ \M?eO/G_ji&I*\󋁧рcSRk'A FWniMPA@I MJp!E7 $%Cӈ7o2! 
*E?ͻyNϭqP5ΊֿMU|p 9}t~ }yj~o; &OcgV N5 -`] Sq͐{_S\ G5yc G(wNtro:C<ۋCS6jI87ZOHt6#P(oWJCK?jG,>b6!}vOeqn+ZTIgg+yKƹ!H"sS+r~Mi#u&&_N'GW4*?SLMnVoՅweb0 a( '؂ۥMb4|A|t|o18z i֞nX[7@{7˓#*mqO8y*> /=sp6?#^k˳mNCHXx/">ݣŸ>ݔYsSsfQ}B8:z޽xD>:ã'Y(쵊,o]=]ˮ]wZ.|~Wԧ2f~V Qq0Cr~ZU-A&\GبgO{|sJ\JҥB!}1*! 瓸͋2VSa#)툶eL`h/y %He<@R'&JC,Hz݆9敕|-J -rr. \琱9 ExQœOI[-MɡVy[u6'^H5MvvേK7yג..u3jmWl,.[ FjͿ72xSia9VGe=JsK)é<"~\ KHy*'t™214 =gNEHv5xw{NM +7oNwa[(FG4|Md54C'D锨 icZmEMfG{$Wˋ_ CXCUrG eFa=b+o W؇߽n1:ZL, ~? w{leߦ¼FVb{^5 <~}}˴E|\M]L%*Hjm4V2}0ꅁ=tQ>Rilr>L0=9vH(Z3+a܊ AvgyV6DB-M Θ FOZ#UCi׳go%뺚d2:N\(tO)T~! ct\o[c"ė2\rҕY㈷镖T[Wrr{ҏ} d*sш1fh9P% ^pJSbMV;oP!m1՛`l $ :) A.Q3⩣FksN"Nә8; B+2-&VO$vLk3മI䣧D14ޝR3Opi.F}l<4%Edy2guqJK.l4~J? " -0/)D%$HA H4'&x&zQ\wm16[HM#r3J{1;i&% vۙl^U4N…%ڒIhp4 LP%&P'd-#@-tӄ@P*Fjk%dJ1&kCvsȳ^OYj *[\Y5cMQYk޲|&RڹeYAqYO.s oˤY&dIdsAKO ':ڳ7!7!2[kB m&F#H&FtoBh/ $qT0)0P.@HQk XT;%ȅ2\3i!>]O` rjr+`$s;&s.oS\d6qy.]^o8\v+4v*&SOc;yBcW yȜ|QY3Y\%v%6Kz(%}9U~ cmlS\6%xyk/((&Y4hzCKJ =ͩY3 ujS; nuz;U= ZU=r< q=w#&GF)OU$+rf 1bz:8?5լ6Py_߾'2Jn, l q6Gf)Q >AZDVmgYJz6 لЂ\NZ,.],`WYJk+M+ޜO/&+ve+g/!FvM;ѝN7RF' lw3[gDi9!ێYJk\C]U]+VNzzpz fB \eqΰ+'R2Ջ+yW/`W6'R WQvEϊS] f(ukԞd.k]n#ѯr`B{j_bqaJN2/(KDZ.i[j2qe3sHȟtc>\J.u?y!yζ1Ϻ/_龎sx/7sK^$i̢Mu!y$5gcfn!DQ;0ZGŴ-Ow|⹍r!GD ʤtb7lO7fzɶEpo!\"wBmܠEoζ:R* C>B2@20Rn*PdM7ֳg0$2ıN4* vt+QDW_.Mo.=Jo#wI.IW:Es}X)n~Ūӗ/}H`Z9GnyAĪZQ|6f̆#AZj+&""FXs\$ ʩEM} տRRNRRB2>#Ť\F\KV Zm!%F1ӧXMLP7d7$lnMj٘!5#rj\[`*׀NKʣjSO;F1KthxiĴcիA*E8q[E\KEmll!tUl[{ם,ANVoic1hnpe0|9pRg|]~`S_]K:'MeaέF#7O*cY Ͼ m$h-! ESZ} ,WRi ┉j!#TP}B𛭷Ƨj= 7-ǟnF>-y_=ᰞ[ic?O;I60~+=W]_7U -6 'B"1HGmb?-^ijʨFo]F.&A>YqhRɥ0TF *bދfKj5a-1gaζOih;*'S#~=1 n>vT1r꛶ڢFZ[k7jDu5  e2y,C UILhZ@+$e?b@ģI>uc Ɣݼ:L*+T3^^"#^e>e5[6罞]YYTőG^>_N㥇+_.%Zݑ*=Sk%\o,7Ş8~.!PT޿KX%<7g?Qcqƛ+|x$HinM̓&tPh $IJ9 9% J@kW+هĄ9GzX@LA)Q0TLMc&Fb Q2&7'i-镫PlC>PA/LZa짎xJذDLuad܃]-/'zmʵ~׋<dSYq%i9bhdTAktbWq~;7o[bLg]NΜMK./K©DReK;j9Fv6zRn/H֘עN^B8T57KD8/ԣ{ԇ+>k'ڻvy{/+$Z[6OݾᲹ=jY=Xˡr 7/lJ5ƽ+&s|*EqI6Ti46<ACe96ǩ8Y(ىl"\5.aXsJ:ҿАecW;`k9ow˺]~z%-{sI^!]Ži@BvsH.db0/@E K=aFAk5>%`(30`p! 
ϒƢsΕ9$+0KΘ(N_#O#Ζ=]]*zu_髭C}8[b+JfJ6)w#R5CvYrbKy93xn~.qzb! 4ĻOz`Z_røKv7bhIZ 1l80Wɠ(Fݜ: >Iؗ!pLt2t* yjC\_._%aX;ȇD;$Z;gvs͟y;(׵= 'H-h}~SG{1=3{FNM]N_!/\]ug,|@G#/k5?v<)̋|Zŀ`CUy$`Ebd?Ph'<)kZAG8x3CԜ9Rm^ Q^ةGG@| gBk3 >NKTZ Ӎs0݄aNo;̩ Q8$@TSj] $IZ,_D3TɗN,jd~zSr{n~HxSZ^UVK$- 3E1Bv`jl+̨.l1r@4ΎDņߣI6RmJZj@2)H)Q52߂Y/nnn۶cgk_/?r%[bU|##7,Yhh8䳙1M\fviU)y;1K& _[1Ge/V%ٻ6v$W6mxYxRc=Ij7,"xun/| |)!ԉ3ɥg*"67;&NqЮPf$us5Q9fo&y&4E6Fʙsg5(C(\]PR0$4EΖԌNk^Z ߇Z d.W讥č!J g\2iBRiH^&=2& vEW4nX3d2xdJ "YAz̑^AI FiP mD74qwN~H/b{(yQ<=F[L ?pBz2PwtQ V*|A q8*`żDO}/Qymlu>Ⱥ^kƊLǴY:ٌ8ܱ@禯&7J6z*o5ШDj?.||'TG'?9ד'u|׏ g`=. $INpt-w4ߢkl·W79~oK)پ_,1Ϗ?٣rլr`1pM#H6E?Y%UIT!~ˉBe*bŸs&$uka{aﱓvD2&PU, %<@R'(!JFZ뜤6u^Ф尖Io,WD?%.rAcz'K%D3E%uI ֲ1x%I=׍AKwUٿ b8o0\ݿĀ_^ORϢ>—kڻnT.1xs.Am5 & /U_1B 9Z\6JȵPDu9u*x.xaZ6fyo9ر]C5:ݶBݿ﨑j"Qr%h`#@dMQS R0 Yn.zhg2|nׯg<_GWP.;.r;8k\B,m^oiR7mW_?яaiz̴(޲a5ϬnXxa~zP5i홧Pˁ! !3_/֮7Sxa.XzdXٴbmw ٞ* $^!NqOUt,,=/AȰ&(94"-2 vaךZ Vt7?UdUUSƪ\ç)i1hMan\J;eL*EѸI 3#'Z9dMD2_  g~2fVvXVlVd*sR\!4 4meE0Ѵ/(czfl $ :)cx%JxJ<80Zks<URhiȹ?XЯ-6cPa`RdĪ熽8 jF(bL21)pI^ jt`hIL"cY\ J)xʹ,W]gS{mp\] ma.w\ .,vKYZBHGHjCwgrA twɶCwg*to[qD6_j :gR x$z"*`Sd04 *m]ޡy!iSd6/) `3)4zYkkh)8n IF*Fj#o3yHF%`)Mf+Qpޡ1rTI*ǖ1%ojbSEfz+M?Jh R$LdR#hKF9ВY8i gU\dYr=F$Tʨ(>ߡ=n޿ZO}>٩ůw$+amSQU(W } fGSrqbhGxk/kWm<'j hZp118 OB@[B@ePk hxwV(`eRC= F2uR"ŽxD,F<`uU)ԪPbԚ@=!Qr)h@a`ȹa^z<mSii[ ($-.ZgHE0\;JH4 EyEL]fjVט GRrpiH4D ,M>K "d@G*FTvW8(*BŻ3]9V "}$V('O(oCA kZA.q X?<OALF :8-ײַ 4$ӦxyR@Vi6 HLYLpN3*[msL"s`mrohFeT v jS} Q^ Ԧ,O>IYr WztlTP#'Pb"i ȿhC!Q q"8 jc/,ͥz׫% e,|VU^VËqYᱦ cˋ]4We3mT+ߢVNp  O:Jm4eJDB!{YxȸRHjT3JR9E@Ā^{-TH`c)qb "i9[l kViL6ՅӅ{ jMe|x7I-˦L'~:O~uCaяA2}e0P.TiQJBL.F'QA2'qBWLfZ@ s2&N^r'yPзC7Qn0|L+t6)rn4 +.f[vcm:Y8> Ga R%(1!CҚ&#,sCdyg؆=&.wZCZы,!/iQQuٻ8dWZeG*K>쬴Zg; } KtS@]֌ ]ՙQYq"22jr16ևȹ[N}Mk;"UŸ-!b H钁Ѻڐ\z9eݢi-T `2-L@ZGآԛʁjW+J E%&ȟTaq6 cB+SQJ0Jn;՜LQvΰ$YP`RdΉ@X!҇J[˦rE),PeC%rBPAyUY≜61r֌ 嬗߿1[QԾ!){%X*#qBBJU+VSJQZz=/"h !QZ0uW#JQD4B+V\X( ?BfP+lOzYݧ^IZ' j%o6H＀"z3@|DlӨ8(9oM6l+x`òZr,V_^w#`Y ^(F WPRRZliJMG0»FikaNoF,Pk|T&w^"&啲j'M.qQSAd*xufZ d"2G;c[U|͝1{AFv @AԈ+>>{#g0Ժ cBt֣62;eIGL:(b "ZmD2꽔Y@BD 
!,:?Z$b\`tH*&ZWjE[TٶB3|?oP+e|`ŒP;㈎^i׺ķFGW^vN~OjЬ_;/,#KZJDFp(`ٴ{'5+Td&bK SԶtPا Dy X1-* $IYAܺyD3rGOju> j{UeC~'GNqscyDBYCų"45Nea)R0j",C\2Qx}%iza `;"2x. ܀oGk}h&y5'+ƿѕZ'J#ǓhF`Up nqV7)K)DWXu@ޝ._˷ٽA=yRa\z{+19Of&ZJÛɿWspj~Z^?9WO&bUr2TBфc)   m%"Ih TXϠUf>!Ţ UEIz6 ٟflح$ܱ!soV+8E}hEg77Z BZw1rU(iPd}k:XolNVazN%6RFxPyyδ?^b*QN@v&>ZTKBD$Xo%LYw6j.T}gp7an9dZœV\W PFޏ9:9Q Q|myaSIS s&h@&G*!G212kyDn2ҏl2"y9qF7?~kՉ>t ^oD*`?_6Xڰ~hEIɁqͼfhEbFtQfLD^Rmeȇ6>psi,zt|ܧ{vCeXZ: a9PM^|>4B_?k,/L*GXe` |_*ՎNoSq2Cx pm];MOOLS}[fPhIXgiPt׺no=O+8>~Z⼛}OLҲ-ﺛx1߃ rĊ 6&yn_z|=E hYYSt:X#IKАQ֒^u!’K:6Uؔ|DR16xVɇ )ֆ,2O,BoQ|BokZ[z%y 3pC!!Y)`)%R"8LPͶh9$<]>83了f.*X>GY9K9Kk{ `rNJAL'Mny+< ŌG3Z|՗ﹲY*KD^Q.Ѩs6 M=u *I\ZGjMBF4=i( ,6U:K : RG}.K%Y 6L>{FAj+H  >Y2xi Q8!uPxT<ɟ% 2f0j*F5f|?N/rV񦾧Uaxc./'{`Hb~C\e5Z{nNf)i&?d" ĎdGNQ -#Yc_ZvgGxtyg~ pB[AJI,H"d\Zm?,;l|D>YOk+tGv?qHGmcǛ7Wxt %N.r-_ڣr. >:p Ip'Wiydq{ 1}ŕ ߝ/?x̾$&Ƞ1Lkڮ&OgU2],B L`nph54zxufyB4;"3jsNJ>!fxVf:g:>燣\>Lؕ;rrؗu/-n2Os1t<}f?:?}w_~>H?[? Rf3H{4 ??P {Lm[M6@˧^.&<0cR|zgv/Ln/@Ztw3tk^hӟO+z\}f1?\T'y9DOשr!*īu#2{PA87پM]E;$_AW$aIX2Pd|ںAJliȎNsGV:mmyUq JJl%h%S/:!T%*+˲Y(:M6u*۪O=#:oz=hZI(DّAK&(FZTr6 R9CrKj&t\AzHNfI?jHO]Xd1Rh{aQ#@ tb2]4P4Z}L&qtuOf;{-hRw)x>&|_5lz3\6/rMЫ_>Ύgm̿@:)FY@BD !,:?Ze[ &IWSO#{${Vbֽ`0՚VS:BTVn~SH3P}̓!M`[L Jܱlh(EC9?{W۸Oe}Hr&v0ױnl+ɞ8WMQ1%ZjْbY$*V=%}hQutDwc之,;j& 2HbtIF1pG:*i54%Fdpe }xn9Xύ$S'er_OD- <#:j>10%4E1qeE ]YW@ CR9m`Aߴ"|R2℩&pYAࣇ/1a.!fC)O}(L̿F?ԄsƙuYgsbT8E'$ÛU*cWEd)sJ5 )rT 5R[pRYIN $EU6F i&F<.x\nu)K$)P5"̓1Y8 XL9%sIF]]@:Va$}\mؼtZ502%ny7ū* 3f2u wz|=tQ-murhk~_?>erj]_]Goohe-}nݬ'đwլn|?xM|WFn^9-}Qkǣԍ>{-7Y yd;Z d˥Ю-;PQVᲤ=(Vo|݂jS̆rL(諷PW7| dPy<%;sazt>7e}9 )SX"LFub`ex}5iM jǍƿRPcN~Щ>OP)x;4X<.='0Q1' D*r*9KC,=D~rQ8!sSN\equ/s<\R\Es%AY]Ҕ d%>E8#EoG+e:NM/.&DM)!VV"BM7𩪐Wwg6_^9y;b75d_,s3y9])>rیzP2Kԛi6n<_?_Ym{v[XBk&kj !gg<4tc/Z<C6xꬸL%: [IY BTi}%wwkk2YȎb9" 3RbTLZuw*4ШJE륖6RB,yEP <MO[YHS(R!is79#Ѹ!bQJᄕS?4o1qQfzDՓƽ2LUt+SUU̼))h#ns ERUp~DyS\Nԅ;%otRqg(q$!Yc.u,:% I`]*f)A[Ns+D5LK4׳JiGQ@Y5&&:ԢaQGG%zZ,WoFu-r", fГQ\zh:[jH9ֳbtԳZxjanXx9t xc s!D +"& ԣQLX#h_SL2z(e7~O\X 7odNp^ tDy i!) 
"{U j1-I/O0 g]$8ڂd/锍Q')DxfqpT#"%N: G1=F+c<ϲ&\cTGW#Z[U7ͷގ;T,BTB' ? 1Γ JI'$!Z9k[v3m&v["*.| 4 1KP`jt\%+dBU-|ĻԫG9X%1hGvabf7Vo/IqHUY~<~±ui+X $2"!{1%u+˞?@;U&>R+B"Uj] AVN :g ߧʊ;SA9jve婰' cŜCc:p^1P q_Q0m΍ B S ӡ7tL|ز@u=0}CFئFM-wۿ'Wzmm8yأBGrAV?JTN 8J(Z^9l%)a Bzpȁ TN֮#쑄9' z| GTSEU"qCN {Pp&2GA9KVPYapTLmd|_(ݶpۏn|4ƢfitXg'̾0`bCԄ{%e &N (扌6:"XƢ"aLYO핷684NCK-fT[fԬ"3קdVf'g]L;t:]p_넷pDp'cro^3N6KQ 4ru`gm6uߧp_"~=7=#꘢\ 5G]kwY Ϳ-41h8X|nŁEh, 5^5yv6ns}t'FwiɏiArQۗ2{d8&tSf ae{wNS8kpdn^lpy/Pmn9C9^sk(Xx8/ ^TJjOȿx%qA"q%}}I9Lx qs Ta!Pp*輗,QͥB8lb1z!=Bs)F+ЧQq[;tһl, V1KPYr8JDԞ;e4`pah?t /R5^Te ]z*~lQ *5 "}8ڻo(r\:_KqC3v)O-G0'(-p!=jާ_chYpiv .Cm]d{<ٻFrcW$mmMg?w4;- ALL̅SZb 7e|]:;6y:X~*B yviǽi4H] <~s}R+,K9St L.rLdQA&a 0SPjYva;km-~ջBltALO^}tLnHxYY!CG!.'JȽT[8!\YKgOg$ǵBXB٭WǗ󼩀=n7Gů<,KT,[SLd<_^w5Y eAclC Jx`2kHjp!ƅ\$cօo.F\R߅_+!p :IF#ij>9LFiiU(aaAט`$8G. 9k^e)NY)XVه}:9gK`<_8B(C-mG=0]ͳ 06Y^ud RwA~+Z}(HZߊR `7Jtl2ǷlmZ k0u,cȎin&]e4 wgs\x"/`csX F{tkCA2A fJ#BA9-``/<|la*i|i>RBWol@ y29Zrp8{/ybk H2 -m[9SijqyIv{.X͗K^=lx-ZtWܫ(1Fb::92Y{sI9m:dcTP49$\0fV[F70P1ZBtֆl,`\F$$hF;}>(4uh¾U#̌mJFs1BesufUbNEz鹖})j犦a^wO)cBЁ3Z^d[TVT;ĠHu!3dz[W4B;3o>9#ܟuFnJϻD5K^y7{tr:{R.)D8?5;R!WYvDK7F㗖4Z߿ Fv $vQ `atǑ,Y b78:80qtNlh6'xv&Y\[kKiV`=w]bVy&/Rč4nݷ|,3(M-3WVcsS9t=pih_۟?-kлw7yh`z\UhȐ7qsE)- jQzݣ#B?vW~j/p5?]|~}+J3e<͓i7`ov-H4Og[ $Mm=M݈MY,4=o`GtrM| W|MnxVt`Y|MKiii2fitGl#>:vgoE`[U4'~Oˣ082G|ן~,?}_>ra>8P(*6-?%GMwtjuM-{t-S/|~MG#F}ZgvVKn@F?_}a:BHyk7ۺU۲OeQ\~&5?ƳQX4JR)6YlMF ef|wemmoz◧_']')Zq9Dmm/!I(mrdKArpR['9 K״񵞓fMd5s|BQ.jƄ }KwF؆M=Dn[mNP,`z np4vdry`iRz5?; pTtCۦk6 @:J+ l[k_Zyd.kB[e!89N$I,EuJ.%%0xmJ󾔺L\dՙCe,8yJ{*oVgτ6,ڐ37`e!T*]m(| o`YSݙԶYbb޿yۦO7BnmV8ڝ ^'3;"1OmEjJj}rwݽylG{v6Hwb四w7`0٣;χBggΟO~v74Ww4 ah~So[M/o7psIt6Oxa8Пa 0s=vhL4IM$T L1cb &2E%7IWcEfLO tFl.E3: j[j춌J5[Xmf<-3*~{g׳"xw74n29<?gؤQ-s9f $@q>%Hі*-Murhi`Q&a+Zب3vfS9̂6gӸbjW}vdHx#H΢ $ΓR`HgcDYc^p8-wf$1x#!!G$M&&Hz s>fY :iWFj5qv֩/ >R~g:E6?hհ5Xķb4^1)9's:֊! m>JaD6+!j}0TR\C2!'-PC9_h_-qv[ϧ^'8[f.)Ym<.vvqfqx%e~j>sQ{7U E>To ӄ,l2#YdCj5+6z r_]ѲY!Q{XDD)lRqIsj̒ctpL{. Uo,3MΠWGC,ǔaPTRD#]W۫G҂|2=bxxY]*XY9Q1`D*4+,/xAsQ<зW-4? 
e2]Аyv: Gfe Ʉ* ~3Aw iR!1'B7B[Rb#Nat.["Eey++TxN)WC~X b,]P8K,;Ϙ^ )#rg jPiбe*tn]{ {&mj v/^ا۽$d Ky(RKpzi73{0mx\e}?/y`8 P$nq JI+{QvHx2WE`\C1WEZ}YR\AsdH`[{( ﻹ*RJ9hFWA1TnX:|a_?25]ϻ[#&yipѐk;d"Yg@+ͷȵdI`Rƃ1E\q0fH`ߠRnFIkYG37*"ܴԁWxD.[9޳$W,j_!<ͬ&B Ylaƀ&9QSﮮ9BsoMb,mXδQ0T(o~O.;9>_ uV[{Geɾd{NP ȼ3-сIAZ-wAm@.7o1oQ\_&$oS&w=rHiZo߾c&0? iM}N6gR3@[Q\Wt)-AIFX둱d!rMl^O1d '=sۆKp3^[0R>]O%>5pz{{&Gwr~E=}lҫ8d9׷VB 617˷6yb-ܜ(kbBW(؂u`]Wi Mh9)TCS4xKg?Y]eU sU&b]eWeɮ0_k$)󙖙o^j恆@IsZh.Ҏ9C wRPyYWmU@,b;h=XF!rm;IsCү-?!Q qS"Nka>~uܛJЗp[HɄGdOBFtr)P~B JQdXBAuF/,$&!@r.v{$5GcM;O ڨlTs8o|˦]ko*_CrUumi`ĥm2մlOJɫϧޫ;虬#Rk/e[0R(H Sd%'^+@wGqƑ*1xChQ٣xgdA, *GϒdS[5,TU;<)hro"s %\bA@v]h@koL6gaSFTI=xH.ZB@sg|;is,+L(lJ"ZͥK9^DP@2!7>&EddT`Ƒ?m[v[Cgޅ>)?/+0Ϻxݝuv<6"63_C_JO?' rڠ$EB&*xcd$0:1$£HB˻wm' "4àWSBXcR/]`iJ"9[|X-wJ.Ty&dɇ5< *A#S4`DOOsbOpR)(:sny21hs8 53F9Fdl '0:c:ӎOb'zEO'y4ɣ SVKLbAz$rPz,e%B*&2iGs݅Bw*1xfDurulJHc9yޤP{2Ci!&,[2ӯfEϮka8c>d @%{ !^d ۶FwlV_āT'Xe5T90ū.=Q<%p>ާQ=O|3RKUur9Yktt\؛w%y\r8Od+!jԍX y&d.õo5M`]&lWpeYpW&zSL֠2#g*R*?Wmzq y ZIkk@%yy{E\t[R?C=cއٟLOCjHlϯCrDUjͼTmfٞ3I| ẂfQ$יBbORa'$Kx4Mip8=a|ze n[Zy bm>pD 8MAsvCnR25K#kZm*_fFJ`k}% 8P3 +6δGkX"uNΎo" tvpl*'{~T\N;WҞnԻA}]厲'7GV:'ژ[5>2Ja4,6u?y/.XdF&fR(wyDl=an|)\[h#>7/՞gܭxqwIww5L0YIH.e@!Aquʙ9o ̺8z+ǡl)@v$F$v[+8CPPj(*DːR@ (4B˜;׊+aڅFdǖ;lҲ]j  G]+Ly$~9GN*l[B1ggnXc}##ckw `ǫAUE%uq?xHzEVx\v!Pf#]LFAvVi4+/3 %I± |Ґ嫹Kن5͊;B&HI-ѲmNj 5/|`nAk^úc$IqfQ'`NxbN`dI$4pr1!5?'yKi U]U7fcQk48"&)! 
hT;C"("4Jnyu0 6%)\Ĩ!HE@\iYTq!ҵ$d;\BJ5H0 HX0_ZRuPyUԫYR̐3`<{bxgQ c#٫Sz 9%0E VBpdNdJt?~l(U`=w]bVy&FW)Kڷa8{@ JiR:v1Ş\Ew*!R^O=^LOϮeS~XkMK~cir7q{AzG:?2I{T͆#X9{sav˭0 dNݟQ>Vث] FoE-6$^#cM3Y836MM,{bژFJ>2]N.=sx:6/buȦYJD'fjI9vHs>8>$ƹWƀxv.a[@Q;*̀ Gǽ0<;"rwo]o޽?`~oHv`)F(s+@?SS kL-Uμ%7{}$^OBV $oP2$.gOoКU(WzYjV?J% !>kJAur{wG׭m6^w$Ek<3.@RpZ$iJh9j >z )G+ Izh\+kk;u5y69Zb G /4zŀ~KuZ9٩lhbl+f[79%Pw/Xt_ZaWuKϓ$[\[GlXԣc>jNOS@KkbPEp%AĄk:g&!h o¢% r{gMCp|zwtpat3Wn)zWi؛aESAzoc'7߽ovl~^G C?uٚ!YAk3= %9KѸA=Č)۩@vmԱ{{&g)msžbt=I@2ҳQKGh8A#Z p'd^ҫ}ɟ%0(&'  ڨA drPݑ ru <SljVATrΫ !TrY2W8rAA- ǵ';!Z!=v*CdTI'^6;;Z و`0j`p1[W7(tBHu),񫳶yZeȸEi`YcL{V\;E5qv߲&kP#6%Aߔd{:U NDSMnAࣃ1VVT\-fIoѥO!a)'}&ȓ{g`OǼ!ZH] ,~v𳃟 ?g eX'Ʌ2 .TrgQA)$aF@m \VQ)& 3,1,8Zhs<9#3@s8{:N d<}J'۳17%Xl?zhO!w3jU>l 3 dc m,CG.I o<62/UFzOsUWp1N3~ ,cZ\PJq6|6LrLkePژuWp5qvq+XǓh7Ei,Y[ 6EcQn+:,KTK,VղlT1i&_{HY2 ͒p HF RLzhç: 9~&fBc'2! C{Q2g"ZA)tDUXcDQZZ|*E,ᚒQ~LB>4<2:#8ѣ8e%)[Vg7sb%AeVs/(nv1zmx=aTN>VyƮsU[Ω~+[Ъ9 hF%>5]^hpf.E>\r޿O7( lW c$'Oؿ񇣤5ԜVڅqR䥭f.(YCqcµTY  ->x3qA] OQ:/7m97 wY|v^]{sCiR Y|bO{mD 3 }_{M+WzX~Ə%1޻=Rd_r~ʟ.$}p 2ȗhy h)Bx*b]̤ǸНi'R[]]Fo\id3qÿ Xl; C]n]?g]NcMP`KJzlBFD%t! 
vdpEj5@bRe\MW.!\`#M2bʦ+V;Va*͸ JA>A=`2b(R&v\)TB@qj;=GIXr;=ɰbFyFޕ{S1=s-gŜw 3>~ //UuT]6"Eomxg}#kNyCo*'e[F ~y%z7^snJYG7N,Xɋӓ_|8! g t[TeEmZu]ͳMm|d/;\^ͅVag#R:6\ S)#Je\ݷHAna}:",3\ZbT6jĄpł=$+IWE+V}qc ?{Gr]ʀ/C֋~X$ʇX cCCyiC=zYI͗ziBHstφ:ܯLih=t(HW$3M|f!'BKz"7j{qC{ӎ`!tt#Hp0J:6od6츜ϣ:6)f$o;`̦ǫk}JI,o_5&1rsV/WP{( ~z< Wg-!Xo^zst[ahzs?]"zfثG;fpo{}?\~iA x ?(hf*_'r賟8&^jy!Y!q^v4&n_诛8eP:1m`b_}/ #`NF#[?pp܋_-`7 9^|D;Ϻ~}qk@˶&G6Um3~ ɳ)onNZ hf]JGеqZZj5댢0#ꀃ ]r.tJtQ*^9ofb6t1Z+BoG)- +t~FtJU\誣%wtQIWA*7'ü$9uђ:xJk^]}t)(O? x[r}Qpݾbͷ? W`n>E\ o͛mPN1Y}%.6~تGgo{_[66&zxoxvqY;|q^wazwϖW%e:XKv[mV$oxwŋO7#߮^~5/|һ]|.Yw_iz?sۄm[dՎYۙ>Ƿ񙪺^*۽<1a ||}V&3& qϑPOۿ7K WuYW||Ob zQP"|Z+[,*IY-RI[iGi_o[H F|m}vsQq݁o~Xn.x U-,D͖@շb#h*kK� J,&xct=}I.I_(!9!1k-b eq1rѪilX$5'~le_x.y!5ms)j!\sGokM$4TK1G€QB';%ZWkPk,T2&wMU*F I݊-Šc0ɺ(Z !hzj uuTcԚ1$kFH)Ldit2Q0URB-p>"PA2cRV"16`ltʪHR,-u@x e] Qߘoj9dYZHc Ӝv,B҈&UBE%SjvC~ާ\1*ciͻCs1[_ _[ .XD!}F]FLI,CFH[ҁCB'dHXlOĹL.VkysY55ksvDE)Q̶h:xѬɵ STBf@xhԑ~J>Aэ %b'X5E_G8: Bяk:bBZq mstL Vؔj() AJM!!S`6ՖLpRTD@ (X䓴ROZY{E/3ƐhVbP[bӮ,27F L.9(6Q0J!B QD \.%Ɇ0ѡ^& E2Uc`!L9%Xr E@ #1٭(Pd8gVNRۆu\qJh7ix(Qj1PyUc-}(e$cenX"BՄ61HAN]qah P i\Gg*){u8ܟH7 V4mZTRZJ?#9TUFdqzX EW JGآy , R M a[Mnh+U!R%b5e f [ SY,(R_[NPV $_LYUD1+zZEk(PB]A܁IBmTzCkNȸBMACXF@4rJBB( 2Mr HFD uTBߚr `FY/,uw \Ƌ~k y TAӶdd=2w ڦB&a15BW /pdTYYtRJ.G[K*d`1uLA-EMp`MM -tB\XKhjIR|"2LԴB*#>8M>)+]KzR{*EK*BzP:G͐ B@8Md!*T?y*ΈJʁdəЗL)q*@?$N'.$7Zt:]l9\:]ޙD S4 B8 qt3(ih(t!lL"wP(r2(~aYkl99%E.YwTYȯgߣIeFhQ"}? 
I 9)1j~(CGZ/ h'MJTAJvPX&4 $CB6 =>^r]ߴU7æP >lN7`E]Q("VR2P$UAU;'Mz׻(/V1^SL R-1@jJ ތLY$z΂ U K6"\D:ǀTm:#sq֯@XV=P(-Ag/ULd&!Ts'k?uNu~sP|P?jL1A%V$p'QvA .L`9o|bFRl-V9__Bޮ&dz5]dRק`Bzd﫫5v:yJ*w,OW} W_Jq>zynzшpҰtsk+>cq)q8bX%3af)RIw y"H-I J.6qRGi.6/ Գ@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; @}H++|@l@bs(ɳ%:z7$ N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@zN srH='j9'P/@@,;^ N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@zN Z  g.}!y@@ ;^ȣd'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v9>^Z;y_jڏzX^_o;Z6yu}Hzq pL6n\ڟ;&0.l\1.#ԇNWOlt4+Փ:|OCa=S/O͈:gCW ]uZ:]utJAY;?#Rl1uUGs_NW%HW$dsj;`ogCW$n.tptQbsHWRݻwm(/kmH`܇nha`p؛dwMfF?med'JL߯IalѶamIVwWUAxdn$$T=˓pz7woja40930x #'@Z{[h;2J3v[i:? Ļt MNS+?2gi9r\s},JW S$+qtuTBQWc6 /h2KQW@-GU^]}Ja4Ja(}1*^1<"Օ~<=KNRM?&<-f&(8{42!\^N}@tÌV? 25S\")[LTTKP?=D棽%Vj,\>:IR]|oXw;]H?Z =BHR;,k*[52hMiQ]/vyzObtzJg)7aYFn#JAkFon/alM:|ާ2UXR? B]?T22w#ڦ; c@0($anfʷe.ǂ0Â+ %ZW28i~6aɟ϶?mu> ?ML:z4*Ewx~k hiYMףPs&S?Rת!,ЏE4*bv6*'!q"b iC zO\HFy L= rAjId.ǰ|=^g&'1EQRF6]/Q1TVkI pmf~t1[]NGEtO^gt={#DW岗#.h&rDiy}| ь)8% p F22LݡsL:#j'J%ZHx)OUj#)s^!gc/݉nzP3yt莥C2-Όv4"hR&Ed_\_qH}7iu&ͤOg ,s2c)T—,(U0x)`)vHG8\40a6ro8naG>7:A5sA3^({lۈa"kh#c' j`]W`5jdShU)b+t@Hmojc.*5X@.7댜ut@;Lt {Fotrt~;ͨL\;a,8U)3jN]}6_"{Wor8XK*e1.b7`\{tLHƏ6&ejq5l(D(B!(ZXJL!0"~ y3. v%1KqL b;\J"OEg,( JY)8’E*ncOL\c 1%0luFt,,P4vw Thcz5ɰaȑ1MI>λ4woZ3G8Qт(58$ r ڀ!D酷+1hd`;2x*gnX9[mo<}sX;0i?ѼHɵDzL2}Mh".CCwu/PԵe7w ߃ȎRJ< ph{L|9uodSd t^+=R+lB+"sD<"i[jj!1ÝF01f`~\N _M]U:1]Ul0>RD) n~2+|~r%;EANWj)aBI ӷRnt99~WtT"z\T޹j|..__#VC4:]3_fꚔ5u$=p*"eHR:jHi 2&#s6+A`jviĒ2 Js Չd_jfv@ ,y]<{zJݺ8AWe9ŒmLOZ) 4wOg,9uĪM=t[.&iqϾiLBͷ s ōB[F*5 +G#r03/4dNcT/%hLMMo6ǯvqRՏifTb<uݔ[LxdKX|%1_զXLFiJݿe`JrLBm^BQmn÷M;޹z0ltnuG&MF)YKeVw/6~7ߘֆbUW$yl>\?_qov+e͘ODt{%tQwl+c=$TQbNѿT IUp;oB!gh՞2#HJQSklS -Oau->ky[=‡d0vik0O g^_ۦ''Oh=/M߉Ī[3hk_ ^/ |#Bf_lQ.1㒥b*w仩m=+|z|GǏXҞA6?ZvC%gS4aHu _NG_.#/=Z+r]'֖э {FHR9ZB4 i0aVN•spi23oGUGÕE~k?J'}?. 
؏Jb 2W`r:b%| 6N*0u&p!CXE,6 n尼iVpZyaȺ ^d}u)?6q'يm>LTswP|1= }ݠYL K^׋-}݆gPKRSRr˳A{L=N>P `"Kl` "g/4<"EK @"ȀB FC[ż:E #L4:dXS6g۸"vlOnnڻ)v[M |ƺ%W8=Ȕ- fD~y9w$]'erRxBD-<#:j>1r2"Jg+ ZjB?xf߬5*/q=O'Yjp!GF8!L%V|,~ O3"\8RNW I6x~~*t2A$sq26!(A1>M?A lRko: Z5B˕b}b{ 1--dQ'Bd#Pr rs&P'd-#@W4a?(P4Jc,E)ښ"uI{42R^ڐV lg+b> n]hx;SsgK ~x+{;)zn&ZGwk.o2/w=$}I#%8JKKU6ƃaRBH9&Y|` >wف;6$~=?]ei皥ޙMZNBɥhs:V'>PǘB!MdT,e7ʉtf8婍2Fprpx(Cd(\z!(/&ΖafMhj!X.KEB4wnߘ\eTd9>wiNޖײԢEgV`7()158"#NN 9De.]jYg!3%#SJ@\i^%)czE(yɺUeSnb,T09*J$8*),gI1?n4=ҴD(&Q5phPh1)XSˋK|g8N]2 _MJhj4T4w'HQ!x0@nY:$9qufp&EiCޚU2Ϻ:>85gK=39ٷF[ ފig9i?v.l?;};u ќi~юBɲ|F9?@ސN{%(;K׻=^Ğ&צjIUG'$:N(OŕҐǭ` |T?6ʘul?N?ΕH}G^NW[S䣊qn#'qⷺ 壉H0~.qx,']g&wwXoQ\yS_xu=|z})LKOflvas׻ίQfۋ m퉋=]uںaڻYT1[j#G1~v:УyἵWF:}ȶ^[:ʛq:GrC u8'3hvr kl߽@u^W7篾{{N>߯߾y+Οpr"{|^]R]cӮA-SO|~uGn#|dgKn=u~|.t;-Rx䃖ٸ|kr'Lb4l'uG?iNىT.螿 ]>H;>^P_J]6mΑ~##tmA%oDL0WDZ\h5QG SWճ!nx%/K$3"48PEEs>Y& oI$iLgTNXȘug]Ov>;w:;UGq+bMH! 6UYTXc@\áyOˇr)Mx*9@VyBK)Pр)8n INj Ѧi&F쉂 :"M$MF8Qp1qdw}-.EJ!gt~]u}" \uq%iSGut4U?*^Ğ9J3ӘOm2++8Uʥ+-&}],y/)V?Glv~C3Ih\6~1&ԩrȀ2h`8ϛ7guaMEf代&_PnMxԦ;⯓ 7OROɝu//cf09I/R.Nlѭ?ֻ-nZa9zգ]-qtinΚY-^KC:kE\5ͮl^έ~Ш.) 
]绫pGݳ֏f YD[ov~͍Ol!'fkoozuyyoC̭d O\qtTs[=Y,S-7ëcBȫ7[+:>n %Sg ~7AIɹT [\̩Hy#&#*&ъt/|$_CNL衋POh8sDN^S!2BtTRր ^Y(Yd_HՇn}1H YV~d.-'G\ᘌ,B <eB`Q%"5Q$ȵQVT҇&h`$o1C}(`'0),9rTJd'2T*n +M3Ց [{}9m";Fr(s=4:kAkJ 1PK8ygLH[!xDu9ǀy!,;Z"0ri['Bm]`["sټh~!ڕY6rmwW߼?-YZA{7, &n_ yo-kB i15!YyNgGsao da'֦ր/_4-/_ /{ l]8'>dt=O{?iگS8p?)B8yQK& d\z]:߽Y%T A$^4 [FYd, IϔL 1]t @X@N%eLT&$3⩣FksN"Jog6]+ >FW[qlx_<l$lKPW/GHLS!垣;mhV_۫~hZçJSet./"U6XIyn1>\UT~@UQA8qYb@*9eILApNgU@4!KY<$dRT@6/{ 'O0Dy"Zjpڔ'# $'g4^3µArKFXtFNa9)8Q$ ABSI8RD8'j|?Q !L ?r>(Bcgn|my"Ӗϫ&nxٻn$W>$sdHl&X`&OF׶ӎdnIm>Om)Uůu\R C@,$*a4# .7 p"tEBMM_~Oī9/>;ɟw5Euէ0oVpq?Qѳd)~~Z!6[f9ycLNyS%_vY 7Þ>^T|Xh!Y'oW$dZԯC^*_/?1MEeS3 K t|v޵rgwiD$H#x[ &x*!Vjo+(Gx V(&CU%5^%/R_󞩔Ž+ }[7ꪒy:RjTWP]Y'u?;(cRK4tuTJ=uFuE+B%wE5zo"IZpjꊩT4R]9eW.O>m>ur?9gʖfO𛚢z<_񁳃wDp[Dhqv,94JYR&` x!r u:=99|߿[nmM9-ްxf2/<:usXg1A TdEVO0ћl0>:z)B?$ҮUR4v ;K m.!!.h Kc:\Pӝ'8'd>2>XQt )VzdP)6XZDc2^4%Je2'IiƀX%"C֡oQI~ /G)ߐu򀴖els|rCz(-ݭDGPI(NCε]P⬣u"XHC!-x)eQrI2XI(g%Y(H$m)١3A1xҗy܂dS9xeAsUY'(eW&F(B9[2&d%S9;Y/W1[[JfutAH V)v-S--bb* $#+UZJ`@QZ E~fY2o4A4е'F9xaJ'- P,4BfP+h'=/~vo,JBuR L>g*ɒ>afE0dFFq`zn{R^jVKV!x_ɫ(/RHI;[~)J!+LJmz F!#´]eLf,r_k.%ILJ. ]u&"*0&1Q<ډ%Tfr_h kt^;Xr瘑8FUc] pW}xl݊?,GMNB c2Z6Kq)(u4WB9bBY_C"'\ˢB[ %#12 mЮ9C i@J4-z4QjKc>qE5nYd')'/f6oخ[=:%)GYo\}I#j74JK46hܵ3`?74rѨv$ r٫.(,N-7rkmjjId%&V<阢9>KƲ H6ŢLL}VH22pHK*.Q+0ڃ/m 5Ghl5#v[uY@lJs3[1ז=]&)Ogtr6cF{qqQWMC >]bڟ ix7u)?WoY~i!zӓx~ou5 nq ^)̔d'A2 hB$ HQ&c+LhTt5TE[O1$k߷?>;~_?˷?IKA/ٵXꁟ~SR4OZ},NϪ nDɼwk2E\TqDb;DZ8>pDSYP9b EVk̗d퓾պͿ+P"vgeXP~lN=cM<,\CjÕJ2/ɯ f;h@{&,E:ӓSԗAsr [=؞o*w_^eպF$,n؋i=XȺ&ySO5 s3{d,̌#0/WpP07;ncoK8=;Ҿ8=ժ6,b}z˗񘴛. 
9Ьj>[4μ֕bW"Zx/F}31󌘃UxET@wR{ c!E R(%_F"F/KglXQAZFWc _vydFdqN$S3 Tyo|#H`եJҫnP=+bzl \27 ?ĭ~lh'ȵ@QݳMo4"Lb5C̱@) B]YNt'-MI" NV:;=!Z֟0R%t)KE>G&D>g$U 1x٤ Y>BND֭%Zcߥ61`TdML5*$]B97A˓ % D2:=Th^`yHc)ҞRD5 Ҍ H42$a X S2)ZΨRت% Wqz&_tr=x;\4/r7M[?ߛ_'_E}] >[z@w֥ӗw0풡V@|nr^=eT[ yЀ٢ ($f̥(T_lYjo=o: mEKܵBT٢e.9ݣ͕"etDUH)j%2d各Fڥ]A, Im{bfBǹvsnܮWxTѺ*@UWA2N\DݟxkQ`O;؞NDK'^ÎAKdrܠ2C%M"U<;23hQܩL h4gH\E9Y: JޅT YkäogI<^]9zJFY<ű/o?J2> % h[imu*VwVZMs]޼~"Z+kt]^eX,.T칆.+IeƮ[EQ Ά ʹ?OTeOǸ[I Ra]*b.f'B$Ԝ9FB(ӥlRd{db|*.g5nhߐ) 9dFG![*:k87$H$ZR 0 <>UzS<16}-u71ffEsQ^ƨ+DژRK(F2Iؿ/zhR2#4eH "^[L u+H)Tx6rgڑ?IaBtQ:! {`I?] Q({'m(G3gl .|87V,ucr&u1-?^}kvzPZq&iNۛ'zq8ǹu1 qpVr3+p-@c\2qtNxg3<ٜ\jrtLdj؊tM6i'_2oWkOyix=_Oj-trrԲ .i]Gqh?:$GMLK'h&3-45>{su}WcbwҮ9<5k+ڮ ƓjqG8~ xLm6`}ufy9xR'ZeG˅9<N0Y>!fmyVb:ijY9Tǥ_&X`Em#>4AA-jn˃88#vw߿7߾x߼¾{޽y-8c,ZI0&"#h꿾]MM-vZAOX]]>r˼a\mIvo)\{dK̥Deڸ8`N !%7Ӟ6= Խ*C<^; ِ,T$`xR ҈H c"f>eRx1< 硞/C;`U4a;N*'联l2e B}|T5ĕU6z^vQ8#Xyr*,iEN|{>tH!|ޣEH1yEۿe:GA$$Ι+2Ypk%Y2zqΘ9 QềVs1h!\6JŲdPigR 9ԛ\5F 1ce%R2Zw,>O,y&V+٩Jb7<}K'!g>y\TKQVCΗc6iФ✥;iR4>kap#]5DWeܬsI->_ȜQ Ycq)s%`bUvR%YrQKkBӹ!PǔhO9V8NHH M sS"{>vc3rZ "k2O[x{݃ЅLɺ=k _i4]w[zlWd>A?jSW]ˠ*)e;]9rUvS7Z(&#>WߢwRhac-0Ϭcf#'BZsUɦ802ő"]UR +&2@px> ZgW1X+"&!1ZÒ2Iר>:C \g묅u# zb eHч8rYJD49qw&agt0,'zEAĝD%9A'ٳU,|w /1 %0 qrqvFŐ6ҨdVw*MgӸܠJ1}lNukzIt%8[,bV,-fnz>ڠQTmk 0bYe]o_,xj6#0 )ɘ=L~ك_j>1'y8&aD֪pִ7,AӚᒊ?/@BQ٭s:H爐 s$,Kg4rim6XK<7o<^)n-YaEl짥)Cmd-ބ; ⛖6 ׳9rZ+K]^%]\ۙȭJ:Ŭ>t%qObkmVQ]?j><-xvϠ{5 Wjmy?_tma;Ci>țgޅI!y֛3/qϡkO%uŚ_޺kks{>k-쫳E=Wo*('mOtҦT-[lzsZ{*%{~=Uyìa?l>d ҪJ\yTX阂)D Na*Kf R B%z.3Azf_'-*HLƙ; Ϩic%LsGBsr:"A E<.X/,v*8QrM]D+^Ћ#B !1B*i *c jg/xZ̨jJV4NoBQISO+]|_dܒ\_6h־ &Iej0s$!̚*I(RĠIEYP\)fq ZZb$ AHQ&a!hHFd2vdf9bf؎5vgܯdLCڝqǡhmknDx#*Β"(PZZNJcpgǽ.+2c  r +:J4k KḂUbj4w;#~}XdE#vԈu5bqƂk4S33 #d9_g9}d1B2K\!N5d]ֆd@gB jH1%-HoJ}9kΐ^u 8[ZYgg\P(:֋׋^2'aCR$%͋ܐ)D 1&.zqzPagq(PXst^rltS}luQF&lpp+Ǭ `YMѻ $(̻c}feJiCFX՘E'e!pi='Bт7퐏-lT2kZNCra[rOt`zй|Ғݫ)ʍo`}`V6& 2FLmsk!{tBGRYVm^ {Yv_2T X;kv*;*BACႥJ|`)QԷZ YlBRM u]muҁr2tnNFLl<O|Z< ]+]2Y&U'|>OC}7dlK*<٪f]jVZUPJh}Ѻ~p%;gWKp;WZyjVZ;jE%]2JKyỷ/əIPQ,E?-G'gBIy9f ˞}{7rOUr_aIG\ 
9cU,3j=_EiqtkrWG|>q^Z[Ndbh|p%y9K.y }?7265 S)LU8o3..79ouNI*ÄF)xS&h9F{24;c;g' :9=1\w up1\w up1\w up1\w upݑi*Qi^{yAL#:ݪ{,uy~Y];ϥ+?ua BLԂUP6Q$z.:FH򩯜X˥VUtieNHo#GI[ Ԉ:̹݉/XZdהw8{ADB|~Y+L/j2Gw[Fd+RxF-kblZ֚|f5޲֬tfƖ5/!>ч4FC}>i!>ч4F6ч4kxz CFdHi!>ч4F*B,qt;]NAB Sd˓%Y*)c~^V1/#^܉N) Qq5h6%j}[NNI: A'CU?n ;D׎jût|ʂO~ܶ%((ɷ: Nkk.AS+2^V*owuTFv5dlup3c'`1ɖ佢Qs⅖ JQ[偺+@@WY_?+WyJ~Z߮zBܮJh@; eQiQVV҉ t%bZJMI5u'_[ΔRCP=Z[b~̹CIX7 m0 ,|VXxEBccZ޼*6_xpp`u[|Z?X8auZXIa0NHɉ/b\ҎU x |XCeh^,LJ`p09,BUb; {+`\\UiD/snGttX.VǮv=M+odU9k- rE+`cSUT5V٥VBƜJ@0CEgíH4`S,A5;a7sn9?oqo+m}lVED"DiŻ -hYԚ 0-tc" 따SI!CDH+Cv7L9K$ MM)ޭ+̹?gIpqu.Knd[\θ.\i˒M>̺X#Gld{ Ԍ`r..hC+xmu vxQSY<Q7'~сY+{R\`rk$x B0C}*Coe˫QmԘkNA&ZaVn$HjXX6rM 4fim0覈](Z(Y)V;-'߷N&"F T%-x 1%ZWc#B([14U]͜ouI I1LQ(wa9[R%b5TZH/k3IS4!e%x*xlڊ˶%k/Ⱦljb2d۫um$!47}5 SȖ)P}Sl/DK0+ߐ'JYBĜMmPSMC&BBS1lU9APF:ڹ\C}jޮZC˴y :RUBʏ,c+L DM/ɋ #jE䈍ᐘRP*x` Zd1ath?wb7sЁ(C]!_X!H# ڡ@֙~yMۤ7kT}VIl1IkJPŝ%l*y(S z/σ!{TTwőr`S5u:J@MSZgQ('vhN!؋3TKbt%fO%6<~͜;W7ѻ+BT gUX.aA"Ҥj11,MSjm tӗ@}~8XAte#Q;H^V*|6bB!WqF.-5⤧odUm#XP%:$O 3D>Q dUrɱI8NJO>Lg5kAogwS(dB +Rl5+ '0f lTT[mX:f㒧6_\nΈu[":_g!b.%F]68U$ \tTʶ.F^}pШʩdldmHˋWmc`s2͇Lr=RHQlD&@;q\}y{l?kӚnt?s>e-5Һ/R|"~1[ڷ,gA9G?\Y5˛¿Z^h,,8Ψ']Cl>lϔ"_\ެwAK:6y]r?bqo:۞:wq֍+sqy:߷]23ZTxќBV/N5 ۛǭ᧕}d)anCdڡhH4!*; ySrhlK`qe\mpY:n7)IwTك=WJ2Xb` >2!w9!:+L t$SN!Ē1Wr%ؤ}av2H/vhp0gLYhiü .%:Ŭ|>[7=Q0GB]){׾:.ԁqh_'RNG3%qrZ82 GOKu/9l5>~t R$K9@^ٕ\sL;/Rc5.+G]@at(kh@*!!I[ Ԉ:S]s;9L\@O'ܹ4L., lNǓrDPXh-WE kfp.λfo/$Y4l|-9?{xmǟk}.1^9!T ~IgK:.'%ak$a8d o;soP.ʢ q Dz5kk%b5JE4R:6HuI\;qӕ0=G]i3v7:"ywZ-hWf*k]9hh\E282g# yiI!AMxSIE:ymb%C 8ܠ딩<|˽O#1rp=1r<{gm#@CU[^ǻNmjIʭ)R(>@Jl1gfzt7z!hm 1)@ZfUIKED&>Hɵ@W}c ZźZԬu& =j#in`7dCR$pu)<#1LvG6fu!^&Jj EЊxpQ9"=oNۤ8hlR5SGts//>\ UjXqZPɲ,B?0|Nd <݌X)Y rzaj SvK]Rc!+r 0k{SNmJf߲XqZ6vW8"4b2IyMq7>D3UQWgwJCqYf+iѰr*"fH rHiϊWM,Vp4 x~]1Xͱ|rs}FF]qHj2͑#@Hq4g%l ;`L} Œ-MOڶ)Á.0zBaňjw:L{ $ԅѼKa'..B{T^idey`eBNKfQAj) ,t~s|xVuc1FET+N޿c?/{R֪տ|:L{Ɣb`Ơ]Be^.@TOۆ'|z:EJ{`SW7RnYzZ@x[0.DG$ܮhhfW Xvu, Dދ;wfhLy"xswlE D9EF$P1G.d;Ib1 ,F{YqЫ^iwyK1X^a)fɃsSXEn7p1% tW\ʜ\4H2oэYavX[jGoEԎ `$1,nK2qmA.XD XE:h|02 
2jveZf8zJo('"5[14 A.cP5T3`yGDJNcRY$Tfi%1"@TAPVdiӰ6]@a(33R:#\D8tAgӢX,[*09{?oߚkkZg!$ T-M-q S#rN:`{2ıfBKS م#ap\aB^D4ՄnHdG YpG&L 4J|qᛓoa^YhY?cOxq+!=O5p⿎S3'ٯ?|.L~ Yǹ]BGHYgv:"vpaG B&+0C0EKA)Ȍ^'@WGRյٿJ`HUtb !ph¸ Yj. Id|_"p]D;i{=m1uz)}Nӈ Dʋqֺ =au$f&{ l^-a4[SuwVޞ\kkf:>x=j| [v9 G"-|p/5C%k[b!uDY!ETG?QLhxy;]s1V*A[u՚Q筎=Hjutң㐍bhlTCE+*@&~+637qٻ߽ͻgy{q#0 6Zrg1;p-X@f?Nc"ng7Eݮ  )\>lXWem8s#W^=w Q4y^?(^DP}[+emb# wOm$ED֞䌤+'kBIIHh0HlQ^^?^kyQ`k2E 6 aIRI\!3ݿiNm˂ˆ˂6]tNGԪ>l0a; qJV a< 23(/ [`:?y b`bҹV瑀2.CHVd_g6Nm_+Xdo![]w&5\:3f4Ԃ1gzy N5X4ZemW&˙9|LϘ\ݟ_/giz eh(|K[g|(T,[6 r}[}ԗspyX$/P6;hNw8h^׭Uꁭծ{3N~!(䫁Kכ s^KQyLU5l{U@ݟq^4=%ZPE>ג)E 0{M5Б.hu[cR;ĬW逰  @sE%qguMk|:ja]*:YRF+ܸJUݯ Pr`Y3qʩe` "gZSja-4_3. (k6";Z:#NQnH+Nc.MylhE)7j<Rr"[w9lVJH?B"N߲CK }O.6"/8on#oѤh^T99P)A'\`r\Ls9t2,➍їyh3ÔjzRF_U V;=s/^Ag= YPBYjb!L^c(( |?*BHj=~߾ZvNe:7\zV0#)6gF\s6;˜܁uWUM;d'ZtE % 5.F;tJJho' O=]3% KJpUg+@MԌtJ)]`#%yבrvJ(EJ]%Lw\ޙV>$TԚSQXF>7 -cPtzv10XUPW bLNW %=]@"NiW +BB+tvJ()2! u\̻BW -%mrAf{z9tR+@:CVt(zWw`~+M0. K9On/B@~<]Y@ fEǸGqA=a=GB >j#wC %mY ]鞮zeCtS ]%Jwĭ;6nlX`oJ! 77M@ ,&08$'F\=E3#ɒmKѤσsHxH7:B\1i gjPզB5:B\OHBB" J PuJPIVbmDLƩL+ۈexIX[fKhȕ%45?ѣRj,;P$) `i2-<*RwUƼ=FV*-xJdprNWgvTJiBIW t;Pehip,2ȫp3y/\\Lg*J+TY-1\= !V4ve %$7KBjjp^1ăj(Zq+TZV *mpFY5p P *Vw\J\!1ϱrdprY2BPec]%Bpw*פ+T{ݧ Wx5=nzvZH,VAUJrRpTK[R4i6W; ՚JA&MqD$+ұPIf 2+Tqee6%\'zB+P+T)l#ĕT3PIǺ$+TjDB7jpŶz!kyjqUIqUM-=d5Ukpج\s\$++u*BBuճ FhJB'+"; ꀖj*Wf[\858O`s6\Z+Tiu#ĕʮ- =;pAt**鶜dh0h*-t{zBTeЫth(a (dLdG.DF/A5&RRHBB&]MW+T)H#ĕ2P dwA-;yRT6:J\i\IW(XD6 U%WG+&!\` twQ6:F\Y%It@"8ϡCWMj\-wҤU|Z*ɥU5@IUS)k6v+7zlSM k Pѩ J+TIE#dprաשWSUq*vzg׆PP Z @-}j/q*npusyRG9Ho^IU%ݖqZ3M*G-WYASc<*eT۩@3YYQApT&2Zdu^본D9Z` u[e a,y$jC)  XPQr%Vq[wTjFri$W2!\`qSU4\Zc+P)I3yR&A0c d&Q ۡ} WRNS 6&\\A ZV{ Ufqe,W%+I:(&c]{J\!,Sd唖#f;Y 7`2={qҽt.\^¤{=2r om,.l|*m!~6ZѶ2og.\!^[}]y6 qedy{ג"BbA{5?Fyk|R֜zBDz'$뻲-eޖ wsSwr4LfDBsGXi}A*#-^Z04atQb}v7|pc(>U@]]FePw"'5uÿ+~(~8!ZɅ"o-,Ciڿ7/p6|ܥ=f~7ƳMΛQ(nTe1i8aߊDG(*>WM\gד|;/KB@q u/~{v +ki5TQew_P.  
[-`; r 2|\j_PB)IY\[U#!{!89 {XV'ozsr6^L %wdSB.|%.Z61P OXD.Fkm a ~x=I"b+Gme ̗U)a-rC tPNXt>j9&x :)=Q$:*63UԞhTd!7\wS,Aq8-ՇyB eȏxպpD'l{m^A^;=^j3!,aD՝&S"IH_8>v 8>6ʱIӭ֥կ/gӳWm1r˦B̹dzr{mWGZB| AȦ,wt))FhV [!G WV|4\->:y9q1VFuMnH+:^T:~6:K(>ߏŲ$Fy ѥTV{UP&Y[qׯ5|_?뷧W/;|i(t6J;Ku' V?Bj_QC|[D->S=k ޮ>@xl/ܢqa?#|jt~1pݝ`кn5GG q= xdC5?_ UTJ@apbf,P&Җ)VuVM?Z&*:>R&#>hkwGiJ`y BkL2( U.>s6,.ju^Ϲ`g7eL2ա#Sq;6}9\C,CW}F( Ōg\375!,c0xKA3pٮ\Otvc-|r*UV&֣ La^Rװ2S V&9RP&3 !&HZ>r>xfS)˼pyAw ,l,qwؑ׶-CooH~vZGhS^œ2&IG9fEC=__Wo abq})vm}L^EEzB,F^t۷w SOYIٓz8~gKo#Kn wk!!)Xufqc(DdXb#:)<߆ >jgS4vMicRȤ28w* yf?e6P6)1sیx|c0=b>Q|Bk1#-{Y_Ψ9$,Ј1hVG{e %ָܙ[*5nBn,+С\:ϭε*94Q3iF1,4Ǵ҄lϳ{,+F9 V('V'xx~ Ӟ tTOB.Gcb MsϠiۥ,7wχ\c>#˓eδqik.I4gc~6g],pd.F839ɔ% ⹉(0A`=3W E95Y.A[.pU ᑚ@'ƹ={8s}1}JOh\dmcx;fӳy˺}5D;|4B 4"'3;a#[ޭssT׻~)3?u_˛fXCs?}{ bz^oxԱ?Q?(R{.^R߿Pz nx\:ͻg:n|if:vh^fxZy~ܥ2Kڷ2\qJ&+ڨHM+r-QKBǻ3w#L]1\vڅ¼!8Žqx_;RՈ(;))&Z6S =tz@kDl"4Ќ-{ʿp7|.H뵐Vdk_}- kdUub=Z^g-Ӭ$u\k kxYWϹV/5k2k~SU퓟`pyd"/eLe=e%X>bJF䪳%L8>)]sƑVc7ZS'6TAZKfMH9jMShMrJ ^5̜ *hdž.$/lK_d55M `Umڿ~?ܕ+/J}PݯL/x\CM_j{*_._]mc={qZg ^Jq^&61LX4a}0h:H9~>y`wY9lw{ۻͰly_>uuY:U'u"z~]nj;;NԖ` :Z}' wҭ8beĿow 6hi5ߜM[~Izl\`_|cz˱uO[>99d˾6ß?w~3uწf]rfn.e=[?;=Ƚ]OԻ0o) N`]: W_={ Ձ]I? 
WNo}m |oZ֕Oz<O_=pl o=\o/}xok57//?b>21&N[&bzm׭5Fٿv_Q[0ܶzRپ՟%߶C}bGwrB:(q^|] 8zG9E.//1Z_'mDII3nɁL?88K|\7A|~J׮niޥE{6WX0G4FgT޲r<'{ưq:]Sڙ^yh~9/sۯ9oXЂ$"cCiTdrdGkHls!i':G!V!n?ꟾn1?v/@0SUu'3iIRQ7W"в9&K@V'eF[c[-?C>2PI$D*E`Xr.F ͫ:x!BQE~f.?$#Cm:و,s.'C+o >U GhjnLIjj9UQJ$邷ŠsHr-꜌:b|8BruiwxNк($BU)hQM|-5%Vܔ3]_R5V`VӷSՠ͗Rn6*yˮDj9cer3jSVebb{\÷:85nD9q}_.҇d,!Ϥ1cIpmVS9c+:4tąeDCL:6w]/o"zDdŻ ئHǀ0^RP8|PmJ[0w%SIrDHhR:}CMZC P$dF5(}!+]7=<=%$deYND薤< 6Tl+;g,Xd*P?Q\p 6YD I|5>n秧zqP'd}ȢCK,-V XJ.O9g/pC*%:.p /shx#LlDS@ʀv 0F6 3IRc@Ak^ /BAvBC1M(o,58-7 = pd.z634I̔Z$&@?x @ pwFΚW{0a( !릒}p]5,j %dlE'$b@k>"6oX;CGY;Iʃ52f&o, R)M>VTE,¯Eiw ho(ݭEQ԰IV>oFt\`Cb{i6X2'snc >JBU]b7_1+4K@d\FC(ͧ.P,]0pE&J#xugWz%8 =cfW_v~7_oo`eLvbI-Lq~'z_`"DebieB\bF^t~6}N]YkF4wA4y;}R5?[4w~ qK"lvu8Mgu Ls#4Y UirѣfF xТ됀E營0q=p0sWۺ8?=޵$Oywn~0i, IkS5$%TYK gjzjjOؘQq'0-~9(]wsC>!^@ b(Qp[ 䢐|Z4\b0LJkRt(mh+9,e8bmLLϊTQSRZYy j:CWLe2zRS׀%v+F78Iò1m&&sB1%VCdLh\P0$%l d"s.x4ll@66+ iv}߶nН@-ڳD@OD 89@`ϟ1$?b B1K$rٜh=UW=lfޅKpe֩ˉh@*y diTbC+.@:6G2Is,0ޢSn *?ٶ7I@&!q,/aG\B瓰Ay ́(\SRe IAsXzZH{f^-żrxx_\~17/W0(&`4%t:ߘ"Qp|JHiv`ո`fSm~ D>H L@&4 M hA@&4 M hA@&4 M hA@&4 M hA@&4 M hA@&4 bAи (3A U>4D3t4J,_ !*-t!/[xn9>n!YTλ:%u +,I9jI*䪄u6,0᫁iy|_W+T#S",=_C60H4+uV,VCbn^ LV%s^KjSxf{{h|e_ ooM3XsDC͞la&!TMդ:''x\2|\7\ij-xj3qjy;EYǓ0إgn3P{o'wT%f%p_( d%oN|P1t˕k\|k%W\ 3|/s!h8RX!+|2`)s.cXrq\-h]M=c솞8=cX/lBOY[miyjdz|/Ox=6`J> 5V!*EHå`eQ7EͥNXHllK ad/ddB t;%ʫl϶Bnioʃ`vq5/Y+q{8s(^v k{4SB:yV J9!LsH,|]R0j bPY,\X2F( \5$L!49~zSfq.ljQqu=wh)-8(~0:u`~!q}qE AZ U蓈>79?\x[א:iɩ~Q4"}*|ȩ  &F_?lC4.*l5yp5Yl_]J`yt'}yJ+v/(z#xйN:::::::::::::::::::::::/J!yKx=z n oׯ <ϝJeG<{Dνψж^RÈ%ҶCd]s~Mx:-u$X+8vJ 9Jab'ꜛ(nV̜S;OoiUTLD!LR<,@:k,O(ޖYD\ 5Elo&ΑDt@Tn J2NL.0sS;>|,W>+a8 DvA;8Uql>1^:r=ɐEζ%8o?@ҿnqXd,!Ĕ\ƣ3:ǜrPV_]|t! 
iZ{#kEw3s1eMYֵC2]Ͱw*ia ~d`Y5"(>TqMR$ƋR5l wS39IضMb3|BUzP賜-U\k|a榌TnES.uJ$E<+KQ$3-mO5dj(hmqBV`m 8 O5W#j,*IïF|X@!kmc`W+qh>t@M!_$ߑl2Ȃh~4'ůZ5yWm2z"g\BQT0gH=Vb}c}1Yn; `OV1ss!iL-^y:7@T*r|KR֤E*#W@Ɯ݊v=+㦃=TTso]uLCD@ ]X@f\"zJRv,J.{>uZ])"4]:`6]P[@.:mH915#Gq쭪8Ⱥ57AtSnz?ٶXGHL1B5I*nK5>GEy%D\2CVV] 䆡]3qv;T*"Mpb_-A'oit&o{t?z |P; N=Qth::Rζ~>Ol9f 8*AvQ΄ߊי# 68dmN,WU;Q9r)U,"$SM!ȊShG-5 s888CMn܍8W{d1aís^Rl'1\[4`1c2YdTZꐹ^BZcEbV9H,:BؐdMw}' iaG)Vr|^{/R~4`Λ`ƆZw_\39c]"͕'!P<R `)Fчbt=g5~[5rR[CAX/U=׌ ^6Vŗ"\RSے fIu3!a{?ҐBq9#9b}w3xq~텇~H0_)*Q 2;mҢv 6c8mR4"GcHg #.$<9d2¥̴bdu4 @`±r5em,BZȊ?:ͣ7'sj>|% 7H74Qka6{"u}u/}uozyuUGULa(7]˜QϥśU՛ofi{ֽ,WzZN'}؉B:v \zm/]+d:_t.Jz׭@ݒ˺kd}>6=DcOJinѡ7 }QףVv@ X 3c9Ev -i{y X=vy0͊9zm~Lʮ2?` ȅ{;j/ Y:WrHK 3Ap~~lb:즞W?w-'e30s.YUYGU*G:6sI p5l/Չ,ReMcVgVv.wuE󥚸y LM/=7Z«jiL(-VhԕMmr⹃ .KND{I `1sV[ .qa2Zh=Vō_Mm[/ X +^r%dxkn9TQJR&9aVJNC Lm7(fq"jLia\8x_5`NIesJ~(M@Bknƫ78o~AףWw[˭VQ ?zcab >>H`ģ ݯڑar ?ä kä_`d7`Y?eWKpVPT(%mթQ]Npߖ|;,?`m${oX>G#^^O7XUMK{7˟F˕Y'Up{9jُ#VT,<z'z) ]gP ၻ+=s}B)%'wݕN;s^/k$Qs P$u w_=PwFjzt?Zh=%AVDFLc3h6],Onڐwtyw91]nIB؇Ei``{0*i,ZQ^̿od(RȔDՀRY'm+\#z,w%\q(Άi4xvqAu:sMҀ.^2뗟lȃMI4yFc5Bp'6R<ҔO'i>- P3{TAmNGurڥ>hڻsQ_ :$l[f>:Wh!"0B. ""dl!*PB_=EK~ܕ"RRr]BCqW$D\~ܕh!+-{쪖*AW$-glU]}ܕ)[;].SwER;$ߣ-|rRh;st w97?/b %?l`c.w"rbכ5E{',ˮ?~_uOg;8-||+m yg +gXۯ_ϓ$<.gZG4n6o?m0oZrFs/g_;.U=+m4N7RQ]`vQ|;Oo-0e,? l_Ip4K7=-.|KolO6GnH}>mgC36?.o^\ob۫? 
,䡀""_1T{\E_x7f˻6*p!Cn8h,iu=VvbZ[*)Y*zfj;in&9CE>0 o 4'K'sQiS (<9#ĵcu;90D#PkR$Ai@zcxgRbR1 壶΃쓱8C0j {0&1:B"Z%sFPhXX.)N__Wm4Mc52%& RS"ݳ@DdO" %uW}qgW MKX(ߧl#t]~1'.=屹3<6kgFaF 0G)+-qZƀ l̂8,OZzoۖ|KoCU'Q.cV"<(kүJ`SV)kud+ݳW&IO[J5H`9SZrjcZ<ݣtH*ŗL$)KY%Zlr@%(>j#`MYHo9e1;OIދLLie^G#h-<`,!Tsyς;;&ؠxNک,X4e6yTQBS`n_DzW Eq[7P 4YT!&%mIR,+Y5q6֮3QҐqOVԃVS0Dޕɘ8O,DrLD`@NU p"9iׯE;G~agT&N!P +5C Pbc:Ȉ."€"d2 P Wlȓ^2Q/ȎR^LV`a9HW^%QqUj:o/v#`NhC1y!e(iPB=(ƌ~z+"MMDz/WME5ʝbn6)"4]~A7G ,+􃚃'X3uB[Q&3>[mPBabIœMIF9C 4˨H@PLeM!2j+A̅UHMd*0(jlu]h(k)ߒnng~}<%}F1%hKoSm_3.A䥈H<\dw2W"z('!NL I<)!pInh3ӽ>葈>m <ϛu^.U捴5`WRhkۂ]ĶzjSz;k毓8Z<8]  dz~𻔋}Өhka6V))KNumύ~?skjWE*}ac0j11eCWaB6J7c$&㶲uG!))5j)=QKxK[;OWgazG,,h./sh>V좱[KN#S7yr+t)H[wH~[l{O%vZ|VY6y6) Li[#]vx o\@=;~ޖn9I}e4J3tVIƂ^!v2&ѩGgTo҂V4s[}4nPq:MczB|\je4vװ!)~I7wlG}# Ըl)F#u߸ltǓذɌtqA]ol-ww17م,r)U`aߝ|i߅BWb ZMG3;3%ve%(&`Tg]\e^F;ZlN"]TUbW i}wUְ=iyʚ딲C=;<>>LL̔r'[?\gFn=f~t:m<}`<ݝz[4N|b{&;a`tNxK L!B֝kKE[gَ6b}*JH#e"II0՜uʙ9_ ̆e*oCU}V}P8ddpk%*(79J2vH) F(0Rc΃ӫ|Uj%%;TnXm9Ke\jyAa(; Q=(<wv4=@9I Mdl& Rx0v!oT?`Yz4 BbmÅPTPޡJפ@5)H ѡV0ep'mȦs&4e4e۷Bi2^B﫥^nx>>׿тMylЅF_NxtEbBL-`z36MrwC}ߣ=?'_=~d% _խlRvLz ZMY|7*gb7m̋D LO]VQΏX.f¶&% KeKgm qʐ)Da,pϭ1&d}iI)La- ʒT)vXS:m ?sN='_ՒZ@M|),/h?pJ!D]оs$YZ5@38M%b sW=/}#f8)Lf^"np9& JfQA$aFl ~~%g4##quЪ <}:X7ٕQ~3%U!>Ơ(hd (> x6r*vuh4uHd9W:km#I d~T osy0Qm1(HS=|9$M5ږ8U%xREoy6AύBE- צ,-o@oA>\{uB[ch\qBτ,k#V}?Log7 $J˪ Tx$*l$<B*7yЊ BDK}$'x,p%bR07 @IHU!΄hZm/W5@dZ"+ "'ZR%B*jڃN pne94Thrza&za ^|G4xt|+G;\z̻%-lďHC>v#}58N$hMFL@rBkAsJ|0 SpT7k=xvZOak̮9m2hOg]ꐟ@ 0M5Rj2L=3|79!yA׺HK7֬a<m:S:iAjQ'a|)6B?=B ]cFg!Je<\$d.B BP(gh{ dKP\9{b6xBdP2$@tn6(I;`)֡zz=wkEMގײ6Mg9>ZIur 1,B@ $.eJ CuF9ڊi^8h_F{=eK01J&CTtpnH"H*`@ q_*-CEQ.=&̬s.jp<6&Rz%JN,?} -3BkX^ "p鵵1m/AK`;D,J$HuIEg,wk;O&B.J'$^d)s,Xҏ+ Ok&ݿc ?Kۋ^iyN~ϗߚwJ Ayi۲ћQw5z$v,ܣ Gi UY*>I93n  Ǹd욝7ZJ.㙟L2&SRg8r_z #c<22sMKnuw?rmMiĎ˵ە/}ۜx?ih_۟kЫW1x`u\k9h̒7q>qA<,=Z_[gW~j/p;z0ǝKb.8/fk+ڮ8GϷo#Ι@mk5`{UfyO>]G<9NN0Y>!fxVb:irIqBFƉ>uuз\AQf^AA-jgqx}A'RG~|~,?w\w7?@;ΟhKuN̓I>uiS+g^[;\|dKRFQKm\r0'ƈ?Ӟ6=YR #}MgB5GZ;eC%zY*T$Oz_<)AiD1bB3A0NLg}@V'@&Ldzpϡ';ϑz{Σ.<̪SXe5;VF7 : #J 
Zi"5ų0)$p= E?HR?8Z1 LI*FXim`: R"̉ީMX C3zEAĝDˎa9_W~1uVl::=6bW>\ݿ2oq< q;xsJgڞ;]N7"6oJWFeEm:˫5:Dod̄ ̗e v~( ɄTl{/ٕj֚|oPÒf9 ~:˻L=̿NfN I7_0d TD|9J(ByHQٶY6du^^bgFZ42nkNg-\f{p]z=">hvÆߔ۽}n?O:xp7ED'S,zeK%;2*kϡWsI&r+iR͵/fem$̭aN \@uOۺzN7qjG~1ZWIYBΰ0@"@j.:2U?z#X~<{Z`Ojk*wDpG14U✥;iR4>3|V0l[4@wG4Ow"2 4 9Afƥ̥');TI%6 jiMURǔhOp#"AB+/HF7+[U#g/yܪHNJMISWnځRwt|הXB-12]tu?r®τ]\)[ nc~Üx=NOɉ^d`\f24nO);) s@QX(.IHoRp'8^=H.Gagi[-H}wٻ[mf/'|覮1`-kŀkFVHdi=%KYZiIG.Ya4VLdHVB| wБp')&iC&zOVuYXk,jɂ\$L,e0tP&CL*ŅG<& &2p!UQwژE ߬9f7Դ>x3!μ#fKȄ9omht&x2}Q`,A=`ճa`/gSVl*T=lnz69sҧsQ;[4?~Gh 7ΖMRƣFM!X>*]dQdE76d4a *[7Ik2pO g))}>%T~yBݧמ8/<:N";ZL{#!9g9fxP栄gt  vL *3N\dQe+gO <"#蘠R0XYP[PMt MpQa*T+Z%5*FZ/ J_GV=j hLڡ(dY ʊ,x@ 59&`S\X]c974sfFnXV qơНt҅kbwD+.PY[;␽gIHnsF (EQlNy'df%<n8JueE1E!BQ!]E-PX t`>x_YV#g>lA"{(ƾhjqFTFTOq`,(VA3:qɡzQT֋Ozq&SDz/HJ\bpCPA(!bLq]8cžj}(ӇGPaucolݶ S3kkǤZbPHf0H뢌L&f,6Y@ٖc  ݻ A٣ AY)HYw쁍 Pމ,GQ:Nk,6^. 9I9Z `'‘e|{fȺ ::7N8 Z8 Dc!\=Fx`|$H\<p^du9Kq7J` .@a>5֫X9IJ$ŘeN99)<{TgSʜ20" i#hݝM7X[ZtD敱!r},LVRf$SXJT_ʃefr?1XLTsc"m=a*9'EywٻFWzVկ7I7˵ +eBJZA{^+J\% 1,["ÚS琑:>6 t@W-^hhR=bX|r cp{sμ7 k`n`ks7T+I 'WZծ`SK֚׮Upj•#)j>=~d=S?ao{/ff~Noוps6?<\BtJU-jR8c8Q|g-ݛ.V;sm]d"az4ȗLeS*7B6Ѡt‹ DQ¢3cVIiBQ$4xCDq-GF˖WO?Qm*qV/}:=g?g)vR&ɲW~ku__{c\Cɀǟ|u4_~Ւ&_Fh%GQyt%GIQyt{;c@!LhS͕{UUzjW_ejM]CvNN_R~ymh);wu~m?lch]nNV}l'νt,ob*5P9L5Uc|c>A Þԃ_8,8П-OųYkdzgfBCIf+3VVFvJ\-o+T{ȆO2uUșٍ=[wsp*:zjRHA<"å Ibv퐼W!kSֽ2^Svݞ; sY?Zv#<};>zt8> |dGx7?}3_ɗ |l:|wUMFn7syNeU s94W:tSOt7yJk}q8;Eaƅ zJ~ (LJۚeKwzOrg%5QG[tbHNr{ ' NqB[PWMSkqQw \c`h#ⶲ}-.)aHI8Ev*5&M% R 4)AMkqȆJHhHE RErDt=aVfappkqِq|Ng~ !8݊+:Eɼ/NnMVw~f_$~-<838YDKEHCLN|eU\2 9kiMh+1% cJH+,:HTH> BQNQb$|N̜u-Ia3W︆)9=>Xz K=X\t+M^+L/fg-cLpZSۨ4?OӨ4?6Ө4zʨ4jӌOӨ4?6?O]ԏWZcY΃ӳZSw| t|Q-f_oX (?sJlUOt>阂)D4׹~E]- Zw1$$MB ZY`]qo횐= t0bNξݶށE[kO%3AYBE@qrG :j&0 4{@q1b%D-eQLĠTT U!6vf<ìG K?=%=Zflȷq|ܞ974{6l0BD ں]6dTkSͲ"D@,ɑV:lFFmMIVyl*R mʤ&-I -<[Gfl 4FƶЍpb NQY!y<;:|<||4?{}0!)Ӄ1_";aтQI(r}%3%$EéiLZNh6d'JaS4%.¹ɱa\ z59G0=IR̡Dfc(QۍQ{Dx#*InB\xЋb-CY*~ne޴-)c VqB#CAEG֋!jhIr|HE%Tg}x̜aԯ8l|l ƈ8FFƒBk4(90AY 
"epC@8+mL83:kad1r&d7%[WpD|QjFɶqE11.7y$S"EJ&F"$OTyNK$T1&.q1Pa1xGa IbE7v?>S!Eqx`N7$Xxq:;:bifߣz =^ϭly/P{tt@'JUrA2a Xv#+紸lGGDM+kSHTR$"tZ)%4??qրXӶOPkEQ=æuI)Q6% Ŷnmf:I1P$b!w *R+6k.XLRQ*,MʔYYN.Frg낷Y]V4;jOo~Pe8zE E*/| {^&h\L=M/8.Vr~[?zJ|9^SjR 0!%;(1`.+w'ap40 W \]0ZcjE,h,@ RQUI8I+"RRT6C$rL+Mmمlp39!LPk#OlT̜ 6Ls PF&_ [rCf@u5I똂Kt28A@q*r  A0 C|ߠB$=^~vltd IKvTY_"d ‹"C6[ % lbɩ LB޷2ahԀt (tط?17QOR)l rm[R ?Y<ك㈎ >:P|?8EY ];- J#8󂣠t"eR vk}*b,HND(@ֆ1 m/"uL};$=:1*(`IZյ9#a%@-w7DӋ6\Α5_1?ʑBI\,'- 91AgTTγ2A4> Gm޵6qcٿ/~ʵqqjl*Nj>lT.<%)R#Rv{>$6EQMJlnssBM4i3X垕 ,%f~LF0*O \0i; AaP5ᱴBI|Ծ\P9|De0=/GՌ݌3:zmjBw5DuIvT*n6Djx"Q6j p%e*8s70bc')VI`/R/5eDDL ` Xy$RfΆ\Wָ(7դOPyKv| v=@z48s.w/ܧWz(➐/kg@W+Z%a'rP\8+,Z!<MI;/!msepx:A3$s[a0eZ9J2Rc\ i,ո\^ yw!Y*nui;ӬuXW˦YCC P7 S?ȞRN< g'c)L_)'uN?P@8r JiZTwz"4W$m ƺws30iR ;"B\ e{oP¨Ri'KBޖpp1luUuN_7yoNJ$QVM&jKEW oTbg[w7lF}<[eMZD_WOWْ>mxgMW7UP*N77ž;K\/)mlU7hx䪇t|yYEU^宂ܨjqoڤjdJbbŚ&]y_i gk4G?(HlЪ8]hBf>K{<>UݭTUfwܼKk=@-t֯k\D+nOKdTc,MMۀv}}*"nǮ$OWfrx|7cܣ`M5Qo߱mDnC¯ ۔ML؄.쮱!9m=e.߳'m;s VF$ t4Ps{mJL(iRw5HD{hnTª~]h2u:!g~&ocShDCxA l$F0aN;lDy=e~qzD!+p<`3+5 Z@ Pr`1k$DŽSD0E%1;^^.{[R,,%hϸp}+d2^!QrKYO禟ܴԌ:=mMi aXyKť,E$&m;*f'K-m*Ҷrᅊp4d$m))s'  QHiAV$\؄ `"$ʨU1kQRk R5o-̛/'M,ˑ]ä"J) Io3Ȁ 0z 08[< ‰H%gT /ut @[LԳwD@4:&ܧ̰4N+DiH&{aCcա㞅Q~F>NS8"3"@cr:\)# )# L$(t'7#6HA!It * ٔSP8J9'F0XAX3 )N@w mp\aB0#( Մ7$2#Ϭzp{MK3)(Q2דa:jY40ieD0F?ٷ&ǝK 0.&+ N Eǩ>Tam&?YǹJ9s.`i }*_0ҚaD -@ySD.љbd.h2xql*aЫP& -,IV+F3s }lDо0VAXIZK:u)r"k{(rrr<\ҽ/ c$H GnʥGG Ƒk'aW 35wii ~~.7]fwXS531n[m8Rƣ7kafuդqgM颫U Aq埠YǣMГybV>dW *=uC;FS>ߍ\*zq,8GqPFњBr㻷߾?7>|D}|?3_pi| <FaT-kmP5g}>uԫ|zç"7rDioFĥȀvg&0A\]|ۯ0/*v9\Sĭ* Q4m@̃ĻAohM<#ske2y'@4pR{Y1)  & 2Gzņs<(J 'mTLsl` {NJDX teOgz/$Y6;;C*Z*?J#La\T LWY5 .3zU˫:7MH |P)i} `*OV0{Wȳf]kX͟hNW~]ME]I{x}6/Z\V;Ѥv'PAu\Dp{@4jvX :j3mVNON֞ʺJ-cK.'qGږX2tie$@ #VӶ<& ewۚt҃svtvmRwR<" Il$}pK\yÁ($<@x|1#,RR).I]r-~\t=դbCTyS/!/}QQ,}̤ѧ˨%՝n{:]z ;bi'GQf~έ(K/ںܓi{>3%0vn2K3^4[ Wߘcr{@zK)EԒ)E 03{HP]1F: lp6HF'g)@zb=ݿ}hXdp04xT!3A**C*?^H  PJd"$h` "gˀ+h,.ڗ[h2:C`FKG eT)ʭTieQi%ÚWv䀔r6ȡ(u3,"0EqGkZ )$o̹?(d>)e" Ĩ^ 
uώfXj#"CW+%C6&Pǡ^0T+JMޟv;-+^tM̙K8C*EA0g*p,9AA6ۢ"ss+4yt5eqo_F{ )%pΨlFgW:69x'c &ѓUJv Ke꬧1+jDr2]\,u @F~gIZWT*F Q8͒{\Ҵ ,)AeIA%Zhe)xyIViad6Luԯ u2ڈ2dqॕ6/Q8!U/OgC:d=! qY'q:4>X(4k7?5sU!ɘIyZOA,ka~HOrgmri.U~ElH{GxעV zuԲaǺNNwq7?9,GM\ҿx8.¨:6ΘwWٺ6S{ᇫ׫!1AĜq ߟV۹]q ƓWo9#Q S=inݬ2ZOo&zz|8y͋ z+3NSndvl}FߊOkrmH>0? y__̈Gh]5kT@xM#L6B^4RTY{[m݀.H7>h7^ڲ3m] md;X9m$Bwn#ՍtcB@`s)GluTPh a2H <6t6ܨѽp7ۜc? ,STdxhJANHD)f`;S]]<};eO3 'A`-׽9C`7t Ya;cVT'?oq= <95LphgJ9Сǔp"EKd]p%Lw]Qђ5l:d Ea-ߔplA0>y0ihZLTe,)I%=caH1<QHBVB.+k^J#E#GAX(mrG쪻wJ~w!^ 1-i8Γ뷃7Wmtaiyr1E楘)|@g4 i]gmҊˈM| FKoL4*ҷ.}=|C磨X?.~`&Mv)[`UQ0OM b{@Zgպڗ=|o(}d~ gq6T"5XZ7 +G/7Z27TDb:j(ܦ$|7+5 #FöJ A[im EY?CN$Y6iH(HkwRg hu@)C1(EX.ϰr (öDjbc*4dduK@.,REfId S2DE'"KW!Ne3"w[JQ}$ )&H.I ]k@u߀0WXlvM[tl]L_~ǖuZwmBn9ڮd *[ /C%׻Ru?T*??g ~+)ܺL{51@Tڅ"Tjt KSl,Akw4f*}T0}2ne3K0dQy/% }DNShGY-FhM7|L]Q^w[bY R#lDz;|:qR1'VHmT`piV mW?7O>;e*!ztO'5]kZKZ7>\p=cZɆ{2,5?U6qBY :3C,4dH:!c Vqt-G:i32iE"/Pc Q`!:@H L |n~eL*DBQIb6DI}./p UvFv7}- `Ɓ7S̋B7My^eIݩ~J6O|r2|6wc.=흕K|-ݤLi"jld<>@QWqTPQ}A\PZ-:Jڤkp  yT,_~r}HHկu: *b8ޑQFXHc)%ꒋ,]Pe2~w2'{fYE%}d.h @2q{CL ('C'mI;CBZYYdB&R:hklzՌֻXRD^<%U嚼a 0RttVZ4:2o95cX7]g Q(]xt"hOy6vO&FOtklAm\ b,JF%HTeYԁ$JǦЁl4Tc/"(jS:Y¶t`lTDv݈ٮd+&h޸Pv8j#XF6Rəݓ΀I dJ01*Fq&N]Nbd6ad(5[Yc]$Bj "QM.ƞaolׇQsz*CшǞш~5GxԈ8&ɌIVE(X0UB9&f`R)dHzՈKQ ̲mLF+uRPIdJ̖4hS;xol׈Έ_ub8DuH޸d_=EyԋGexI(c5dBu̓ Z@ @h42^| x(78}'Pa=|TںMApg~4٭&R9lYeBM("S  = Au@A g{(`eHP+N[r%'#d/5jKQG+΢UctlmtAt(H0Mkk {½æ"_d4+ ߹8d R.6DobAd3'I*z GI9Iq.eml#cPʩ"K4”\bq5ͩ2iV ֩5~{e5~So$q>eK̵iG"N= Št1CկIr2F3Ò1XB!֢D)0Y5<)C9a+P1x픥]Ҽt"o[־;]c;grjӸ iׅ܀h It֔HJ$ }OZA)-߅PԢ HA $Q`cK>1լTDBB)F#1ϵԦGx8 VJGG=]hY}2d!Ti׈'c&y/kqzKlQbAC amEʢLφ M}7rT@uǔ LL1RJ(>g! 
VRQ+d#;)} cxob V@/[{At\a9F]J ba0hy+G I66/^٭gA\ܡFt6C[g{{&\Tdž y1{\0TajB<=LQX;]-->똰 VlѐeL dS>]Yo#G+^ޙRy Ì s`<ޗumԐ(uP*%J,Æ%_Fd:0:0̆-G&Xfjy,hQahPQZ 30I+L w((*ǂ@H>$'O9lt(78 2^qAi!xҗ!OhfyrY)4ێdes0LNlP⥎Qet9uȒy'[4Z~{bV/S ?`tΉRV]-BHIU ERAD݀w,5sF="({F9xfL'Ť@՚g]Zlvu">BqI@oebX9跃}#!uW{ra\g''圱N*UGΤ£q(yKx%خXP[f$F_N'qc/h_KMi벭e(C4Wc U>9/㕅ކʕ $Pf*|X˯?}/P^v1(hS٢ pV1(e%h# #eJ[c"15^h-w/H@ B 2jacb a ڡZJgmVRhJƼqlI,AZ+i3~gi9Д^P' &܁O<Ѻ?^ &F#;fKM,q9;BɞѨL.z81 I}'+tFkaZ٦G)CρH)j۲; i7ْ/z<&~6x2F# C~\pC0ifN;>IZZb+Ԫ ja_Zkx׎vUX$uƯچ&#U3Og]Fn56jmcS|O_뱑R!E^ßN mۭkۚFt ^2X'>8\\|u h_4n{G[ԃM0o?r]bOo{]+ ޢ[ y^}M n%qqVʀ.>|Sq|g#>?/×ju7ay|\UU̟4Gnh~&F7p/R;wAY#MwozhaVw㌟nϞi3^-q{+CSJLfO=oj<> EҒf&>_xҰ_nbˊO 8q^zDs;6fL{G5F+c"KŕBA QRs†VgRtւ#O70s9G\D1נBH#h+*e VHV`,U86^,ZN%{іsU3dy:C=C-Ck+eqXto?d`\a2TO"#O\V3kfvm"z7өM^F]eflBvF 8os d58»F Z;}AR{\!Nr0b|p?GQg:ٿ8~G v!㖢Wn vKvpc}n vK(aGV ea9;$B]+Bi}+-GtE ]\gBWV u"vte$Ot zAW檃kWjJ`OgW<j3JmtvOtEu u{sҰHNWC&zt`v\cj3jGf(MtezsR" BWB Q*]!]0-]!`%CWWu"+C]]IoklKW佡+m_B+B@W{HWrAP3_"TY9خ_F9MO(?Uk*U>ޠcw̱ry0u&%x *" $K3 h*;^m5b>y-&y3˪d*s(D檠IW"DdG Όd 8~ǎ=ԡeRspNڠߊ['L[*2u:"J,}T'LЕRq"]B}+bGtEAR}+BkDPZ5ҕmwm5+Bd QrfCA|#1 7 57 |}RD]\__BlPAz?tN;dB2oX]kWU;.ֹZ#PڎerЕS S +k/th%]+BYi+}-m ~nz]ZͻNWҨpy>]!` "NWk] ]G{\è9a.lA*fMW- k6XXǢx#C$p( mcG-!u&hq1t'0*=+lCWWr"YD lGV:v-0ޟP+B:5K( jXOoX "ju" ttE'WOWXJ"'Z`?"|}+$X]!`%IBpUot ]+DpnJfS!sε]]mWՆh_;"~ ]mr嶥 tXs~vL`>\}+BDPj3>ҕVj%Lokp89?kBpL_sO?'899ā\~>V9HҲ gٲ``O]";onzxNsd2_IU Q7ٷhmϕ"FUwH?hI`:̊83ҋ<&ZZ2ц4OCGWx,Q?qi?턤czg# VQQ804*(!:5JP7i!T8?ٷ"xg9ɗmn-\Eh.l<?̗c5|HhgK 9iLsٶ&l>\S;Fg/7gkUCܨhcWvGQSƤ6b%ˣژQm/bc_KyV۪G_|hRZmxY5/_5y[_ZJ6lw@/N+#*lܴJMbT57¬X=9c/p,Dr|TּͪYZSkeš֬jfsLzng9Ґy;̏&TUN,~?<Z ٪rrw{佞_o6WNo8TWd2?OfGJPEKDVݴZe9c ㆗UY]aaA$49ݜ`И9UyYS}MoF1>^Od?}fxsEwߞE&!y9w*A#^{|f<68_ <ϻ)N)eUP/{OϖsX}^3ut(Lly`O9:D~p牻=h MxB+M)g \IBkrX=+3r,rjk|z>S\ :҅SoF Epf_};Xpj-b^a?g#-Pi=7E.~m%.;Y~;0>Ͽv|n'~΃dsF9zݓu҃}Le~rg#ݞEpMmzDI>Id>F"+"6/]"+JM"_Dd++}`e5tp|.h (FWHWnEttZ hK+E)i hI"`ȣp|rh'ϧ+EFWHW z̠eY ])ZNW2nw^$ܙ7?n8GL.k{VGGmtu;J{^ ]) k+EJQҕeY]+{;ǡntFWHW!"RɬRNW2FWHWB +5B`.>W?\ z{xyN Axl;:pI1Iےm>\ 
VavQu.~,j&K@ֵ, `UW%[*mٝ+D{&.vnkDell蹲=1.m|& p0g5Q N۞``O0RD[k+ZU=xgAwOHM_j;$YԆ.Hj +80ŴX{E@9n6$@QYKڵ0C$Xp"#IdaX OZs#V0 \ 9qG:C7(+@iYK) HOZ OaQTk9pyon?[s=G<7܂&d~^ct$Lގ6a3csd‘_#&#(`͙֑/dQ4^-VyE.Q@Wup`S(:&pw-G;@?-r _2ڌ^W a(شG*PcJY׃Y E|rEՉ\QsEyj}ոb m[5"RBvaD—z'p+ykD--{p`2J.*uݶI>aj[eYM@yE`/fU0EW.8Iwp`q[Ooou A0j(d/OFҥg3Kh *w\^g#{sM$$qzsj5G[3S>K".}zw ERdFc}%b툂 `=N+%3/Ԏ9=ūF!~~GXJùD,q+`Ä*00jDs"o +_/G: kmMZ3g*$Ժ]<;]~jPg5]u&L?1p>&fspsS>by9o%!λ6}~ ^Jp*ʼ7+jK'e#96ࢦT%.UAHHŐ`@"[;{Eؽ`:=ew @MQr:Wx}  !A\0٭wúG[ 1q!qDIIL(6GRиVH4TnHlM=Tĕgdr I Q$NPpbr˴ًd%!ynGnY_HSTm%nK gGoڀ|9npD ї 0b{ <*΋QJ|͗tt _0ZfEIp #rhcUeÎ;u?- xM 9A>؏{l5#^EERwtխחt~VΫq%]0Zrj('eK~](睁8:ZX0IK/dƺ {TkYǥM<ϖ" U_ўu{잶8T [wu?qyNŽ=ֽX=<2J؅1Z<>8nF3G]gta-ZN^g!O|Wu ZגXJ>xuoδyJx\.{ /AQcPeiV% `J$gv#Bx) Ft  Oz{Nmvz% T|r ro@O S Z,1W`X?IhЗ$V6Ҁ@"ND(sΑ HӐCR(Xm)TO5‹9uhM*WO76$O..p N8~G`hGl9~) AgUaÚX Tb7oj$)T4ςEj<·qbs5[KK]DKOJQ -꒝9kXxN7ozgsXOM1:RPEyp0䠯WP ~E\:# 8Da鳣?y[ ^vd$A8- p8lqV(9 ?&$`>}APsDѾ훯G{Cc?(6wse~NQ~aw((42"W<2 @ GX%O,GE9GyM8z! %X6 ڿ$V7@^$b?BҞzR]JG)*j)o#4͝@د4j! 
Ԝ;Հ^'G"YPTr􏦑WS `G\8>w*E quEo<:݈~1a#Bg_ -$2/wfӬ:^4ğK>3Z}@!-pݽ5A ZQ#"x!]TD\ PR2ӾГ 8[:.̡ oJ^/{C%*;\ d[?|+NKX ?M`[q5`斐dlj7d⣓Ǣ,igDFc0$)Oc@1INNIA)dͻ ;n2gr"Ў^X߯'0,Ȯ(U9r욏f]{$$)uTcFE,V"P( )X $&B^=RSWq4ms>qF:azW~ $~|dX4@o6)4Xdfn_8z6vPϳ&_gӏX`u㚌-B8^,75|3]j2ƖHP,%}{|WBr,{n%\XL%@{k1PȠRJ/]AZnn1g7ɓ˧OD3)w]^,vs"Z}}՚%г)*F_^rEIч/lo n.ń!D"W!$%0X-Ij#˹$PVB($2QR+,1JsTY;S5 3fr.4Cn>>PP% x QOc.++ 2xnbvk{٭B.eVrMDB\eS>rYD-NZFvxٴ읾!E(=V8»O޺w6J92@*r.Bx^{r?)!D"ԣD(z&sixs?4^^.a@ڿ-} z^oY:K̙C:a{=EpP}14ą{‡U]7 &砪 0W,UAP.Kq+:> $x1,X :)"V"UI 2TtLhbQ"_M֨jF*j}>ڐ?ȔѝP1EuawThy=b5^kej\`O.p&źjWF S^-rr?.(B-Ulz翙lZ d !$j fw/XX|v kAi1ٸb<)akPHQ5 O,,t#*Z,rYXvۢw&`\R1L.z߸Lg+ʝ8|݌EՓ&fŀ}'0l !e8ڛs)zL؝y-?Evs 44& "Xki# G#7N݁P}:Lso[iX{_ݒ 8_ ^\qL=Fʮ2K_s,rU*IW]=#3_m1 [fgdR yǙXl9/%u$ Y!>=G6gUyO|_RHiID\h8NȨFb1*,Dje!D][o#DZ+^Nr`~19 sbq^@RR&ΰg2_v%9}UU1{Da]N2e 5&rDd B{bغ& ZUB]Rk6,e ft>m;`[`k<ߣ8 bqvXl-!^znVbb믮NBޗ!3oWv #X5tiBCT.И :]q:a9% I(YGt`Qn6qȏ9ưkPR҈[IN9$Y)$L@܆௻r1:V-tZ8+6\5)UogI;-q0wnplpAjrA N؅ Q+a)R63Do)/]% J)"Z1DQxƱR%S ΂.& #t)5SVn4eer#BZ)JeH9R " aD#"L MB`G\sk"`" Ci #\ـ^]!%ɘNZi Npo†p[N}-$GG )XĔh0)l akɑH7D3ʟkǿxT4mRèPA5% wVfApzgs[Wj5>?Fb@O%kHw|Itٺs wĤ16(;|k+LKN,jEzL֡+zNM7HVk}3ՈHk|:aBմHL+}ov9kKߗ|Uo(e2<|?h[TŜ/>ͶU 00Fhc8ccI;x[.;vq84m t 2 :C0iAF&>Hu((˶O(JAD]#8S'@x9ɭ|;THhI,R ΰAƠޑv @42)!/x2 ax;be~ γ yj5Pɇ2he>.{aXeӫAGtV.Nځp= `xKXm^>{S/ǖ7w>h+6aSnn|u/mG+0dzOMʘiT_0i 4|Ç 2#z`h_L*M;JЋjb\ dos&zJΉ;)Oyz i30T$i,EV0xta CTPs~{QR0q@M֗4뇺!,Ν+9t(3~s?TU e^xҕ\6RȢs~MQ*M:~dFDn~cϑRq[UGn9wۯ9%xy~V:=CC3JwȎdw(^Z7rz3lnZ^Z s痫f]ù(䰸<кt%:ʥ@XQ]֮:l9V&J"OߧT!^)u3=>}62NH@fǻo.,Ff>|\ ~X88T)NGp-{'Ǖ@BHAC~wF[FΈGq&EwX/=uP"+dEU6;aJ0@Qa<<ӷ~D Cy9T>>Mid0هSذ%kPY$*ݿ*BAX{Oeq<4k?g訸}-̸ETuVI~}@9;†rZ)= (ze@1Ȓ7wY%y0IvpYe!%d1Er4GO[isȖ7'0>۔2.oCbak^Հ0QhY&exR.L}1mzzH:0~0ӿl[Kl9AE'$(wj̤$c $2J:O.?u9fvJRzZԼXXtn_T*!:T69:;}-v&H^xi\NzWNwHJZH|&Ag͑bɉ VWq_$wdzNګt.K"hnM5Ij{5I.ޡ)ݼrb-2+'^"vasfzp D#if&<&ɉj>c-BvD60=lwY^w569E}J0aT K؁"gߟ'yqE E(NE@뽽>BtnO:7owJޢ7SH)b#laRoR#L\)(?|͇MKR(Y|E;H Ta'rP\8+1`M֘*_'w[ڐ⮝,h^G,0GTC^KM.&xex2r ryxx\~gMH~|p_~ `{0q0W_$хb*fa(JT)O,Қ:"(I {ɣvɈ`ٛ("X R@T!̡w@ :cOsǵ "q!)䅵aSFCL-͡Y0KFR G՚D 
1  .*Ez#Jhck$2z#eYcJg\ TQ/2FPOF2-^Fp1HRoigYw֗Z)o)2y!g^X GA!MNn'9芜jDc)Lr4{Ĕ(шa6G.P`$1cVFt \^{ĜVh,(JX-|9!#5f*:M KOҨp!JA> B^"eNDQcg(F{M$+t֓o>h4sySl->V˗7B8ټ?Dpz a\E0ZNv)R!%$Z:nᰢ cPEX AʋNMT yVG9P+nc0EP Y`@L %:@k@zZ OM<ÂjY !!8#e !f[nG¢x7 ,Ss6dVoTM]ziڱ[;aT }5CRpQRT1U~1)Ys xJ`?< GÇ>,\8#~wOS?X#?$cMսe4YxuXk1`\S99pٝ/0oӬ%cϳfޥ<ߖҳbcXT?{FrB_{ ~wg+{OK^!); US4(3$-1UWOU-AȁC|ԺJ[S@=XY`6fZj@ȁC Le{[!O%mmn yPޣV춵-)n -Z@ȁC8&d]nMyPcݶ ej66&ԺhL\ĺIN:6A tޣu#,S nK [9p6 Z7 ֭)JufVDۺ5/t-nm !S' ]nMyPcݶW&2e҅u[ƄZ6rm)N;_zԺI ֔%iz:mdyZj@ȁC|uN<3mʃ4GŲS3ƄZ6rm)vYW+?V5Akm"@_%W (k B2(+WX}$}Q_}Ԯ&,-$GW*mjZS;ZV5A kƘO'mj )u%韖OK[ٹ.h4O^Z): 3AHDB@A(2r-1sT'PqB#r8 !1ɵqy̩NRs Ќ> oxJ 锴H:A b`f;є($Ad(ytN{*yDaʘ%eWP8J2l>! RdQ A0);#b,f㨂5~h> 5"rHDMU )OYP(bFh % Pl€VtN ՝dBYB9RJJ#GG|ѣ׷ zrugfzP0LӗOfY_q7)j@ʤIL)Z͹׷W.4\FqջС#?5X&,ܬU?Dq(FD`v j֘J66\MgW/ѝ^]}j&4S٩;o˚*uIdh؎{<$̩z(UBzbShi"Mn<lH+w}WM(=Ũu 쩷NkGnP|틯5w >*X~\|Xa*㯗.\~yhBYj~䓙K=K8.GWI^. S('Ei(u#QN׃˫ۿ)7$qpv983\g(׫=Hp)h$P;X(^u{}Kd)4.'wy7\Upb4yq2c\t%=~\cOgb'Bg1Tp`_ռsq 'P|u~X+ѪJ 5ʗ5Ą2貲9ќ` %.v Ti#y˲^jK iEVH(ꐣ(Y[pջvlzfJ[z@NVCKעC)0ߑVI瓟D4@V(CT1EMy>;Cz:A |f B`INN'zsC-\..I#c hJM !2\Br4zvϤp"lFF &݊\sNCWPv(4i ,DwkQ{\l`6UD_5V {# BaIҏmN8?h_dzOGOs  Yw;7 ?2o|iBJe?a'ᄑTt~[U]\9b!a{ ~G_ޢ 8Lgw7h<t8; o;v:;Ww ?dmsgh.÷o~'] FТؙ?l\rMɅhE Q-L2p H|JeL/ 1ER!٤=$JTz"ZYg! 
D+ S̕X[iX4֚"W%m)ZZ΃WI&X=SD*xAS]ݏtYQױ xE<IG),8ōTE(m Q h3#ĕ4YLr>*J:;JZ;Qu]Ccؐ؏Rm%کQ |*ۯtrKA$dZR0W9佞$}'QjmPvs7a l9'rt*C:fϴ ,(=`%P0DC *M;ɺE츤5d1~Hg} B?stv[w|HQD9t&jBB1)9*A9< eDW~Hg%ݭ2Ç~Hg#tC:Q;9SnrQ9ZHI:3 7Pàb@uJzĩP=2h8hn-Ϋ `]rS QntҌ )lZKCqn>H}z\< I'8vUy>aR-1eu"8F;'g%e i4,!2h5w0 qyn$[2,!7}~w[ID#-z\ѝIqSe>W!QbHB,F*u TJ[c:C9xZO2"5IƪOWG9j ӯ)7As4o8~J)+W) &q|׻;ܳMzsp8dK /"cؐ0H+ A(εA)KpvUrE35҆ !%QcDh+\ RRw E8 Z2Wm WmF ǯETgi A' ُ$a U&(k%XMNXb4 +0f r}ݖul0F~hEhkvmLrt9Y G՘|I1kw,)2eN:2xStIHSBPu5&ݖV)!=.Jp_cטlƄȑ@0ؽ,uHdH!/OH#tZB`m$:j+bh0A4h&b+/(zgXF%Zl6%btZ30=~V(շŇ]l-0f _ZDMWC:pZeQ:Y ĠS4Ԡ94%湌&H+4ːX[M㐌AsMmRNxGc1*mШ#H Dk<#9 ˰@yߠ7h숒lARIW<K&y>xYS XB(zGE,] PĚ~JHzHOI2~ W~J!*c>?>S :ѿˡp0kZ&ssvy6=E8_4|s.j䨝9:l'q>xuҒ{Fx&t Z99[?UO ?CV;ʛf:;;NlEP*i%&z JHB0{(a{|)ӌsN3984jNSu\GQE$I&#!(yЋ"8ѧ0OϢ(vMo 4: 8 1JA7JG( .p My12v1vDYH NuߵȑRMr 5 T=r~{LbJdofǹlvfǹlv\-dޕ6r$"ewfyEF0pk/3T˖(YǯHRGbl6Z"2⋌+#2"9>0OxڒGZ)jA`Op0LXl}Q/v#Y;wUaڦ˂]9E"뵷e:bρ J UU>` vb`0\s~niv񪅒2E/^g+gGOS-!(y0E.Ca(kӊd#SC#SПe8 &9c/.a֧(LJA 85JD:ɰ Ag5ɰY" և%Z:7W'nᏓٵ 9|LrIGD{9~V Ip<|k5PKv3ȼ~~E`B7L4״9zWtyNIoq9"# e?pq|tnfvVX3D'TԨrŨ6 a,_.*0TWIP8g=HD0Fgb&P tFGΒ\R~VO:g09ҥ`Hv6LVk39~isd"y`ч˖gW{(, li4kTv9Rj \-\UJ4*XqE eƎygBʃXdE ]O6E10%zCt:^M&R<, :q`2؄~Nth9YfNvfd03]%oEZqT)qg%%ڲdc -a@mE~4{\}A:[7^3'F}^rJ^dl͠: %l$ǎY)1U-J*\1tw2gZNSC1$T e W^B1KJBF[8[-X"5(IA ]PkaY22(YY*?OOUݳ_>7!"Nxn};7KB@AU&6՟OC+FLoSrC*3n.sO7Y}嗾,y7xח3Z$\zJH:͞]&>ˢB>mo?_@ՃϏ .魾)V-FZe}>DF+|Rz/"K؛$> f "zK 9zd HG2#!)kiD+1u pvO.Ax,Yr5=BO,QRחfmveciYƅRz}w['N.Pl[XCvR4کАC&<4 t.`flx_Ǖ zoޕԼ< DSs9:Sax6*ƢJ4+pbNcn-jY_膞ݡkmsb'h?͛ X޲nV*^r :%f-Zߒ՚T㨪QC{ֽ}KE+[S^4MIl-E-I"bL QKAej~ȵc鬃3@ZKBuQ $ Q(50D]h'1޳Tf[`Olr J{_dQӡX*(l%M7IfSztI6'ڜksҶ6(AZ*+!/O8Y@P"ER'cE=}_ԫG_/@D?K|YY}]]ݷ-WaD1GY^w̾b=d_ɾhz89NdBNd ?yD!4fC`>ff EfL }' \iσJ8!(3ROR"$RB9g´79y ds]V,<2l=܋lS)댔FU M񈐢2\k6@,H̥qRJVK~lMDv-LK-Kt,]Ê$Y@#Ky%Ռrb)\;Q38>f {Xv3,¢Е[=mVH#a7,H;QX#{W'ʭ SL22=iFEN(Dc-v{֮yҠ.[W#s铖Ww;( *|-(IN0):ȯ$EҤ 9RIYYZ2rMWg9XM8 2HUGTdbЫ5]fW.7Ir.Y(v"2#$o^29h1`+ MC+ȪU\N% =T'e 7ӤI'e-~NuI嘍Eϑ]XڜѠ)G 1wYݩE:2o1b#>)Km٭/ݺ0RsT>.~G9ws~myW2p Zt"x=ƤdgL!d̬ Eά&#˘޲>, ~孯\SUy{w߷/{ 
[RϏӫӵ8mP s/w?g0eElV|FbR6Yk\鲗]QROlb[Uz6ћV$R̩0gͭS9KF Lس"NY3; %y{|Yذ8)c[f$8a Bf?<jZs!DS&D(:ʋg 8cFz|YL%ZmٕfSҳbeb,[I YL""e^7@s <#l]X/FJN cB)ٲAu@ڛs5㑂R"򜊒\ %Hec%Rubrh<_6U9CR eZZ#k6XNʜވOe<W( ѱgۮ9:^󷎿@|vxw%EAh6rz4`i\zǻT~fGGT7[AewwdN}_OВ/&nVR=!]q -qalM˗1q|J՗^!ԃzz%(%np @4w@UEu+%0UZ'Wԍ#C#NxY1G=v~ed.HɆ iFxɆKzua!\m%: ٶN6V 6s`h7Rk JɆQ]̩9Ąs:rdCo麟f L`TҍPĚ?ٛ_牅 =Q۠|l>Lm)%_/Sb5] \ݞ/tq10s?3s'OZtEZQoeƬ"YwN.b G.}W "?$iїV ;fv^ww1ֽ{_ݞd ;{ ޹|.Noknw.0>>ocR>?(HPkPxaN>PѺ7,ȡ[؂z6l#i:~s4> 7}V.ھx+_GvRSȲWaAay>;]~'A2c0K]_^Κ&%=Ac7?OԽ*SOGۡ/(7De[Eߌ|镯 'M.6LK(UcJ& ?yt>KkZ0o&ַMKUU+A5~}/N+FgO]:LEFԼkoA]yn|8W/#};4vO#mbwдcA(@Nq|A`t@ !A[Tx@-Cwtc ' p+ Pރ/;< qAGRk+񓫟'Q{}DYY-b¹@^=Ps99Dv".E郏 ޤ`}t&"y*o=16_۹Blz&̈}woն*]֍JwrgzzcQF-?{WF ly04c{w̬L0%EJ>7QH1uHUaY* :0Z0R]kXUV7s HQ 5n`VMha<긁Sb9clysa]O JinH,eS%$JJ3[>}w4Hl0.cɣC|H!Bb 9̐&`qX"R(F{71Zhc|܀@)ecչ57JUZhW !rHhU(-rrry\2q2,:mRQ*Pf8URME9h!L v'Vr^9@lmb6n]e Jj~Q,S#Qk,-Lr&VL,Pi! 8qU,W:Q$*ą]N9̔>$Q2d 8P\)_q|q5эxm8x6W_>7*K_P(d5M`Aʆ+-ębU&M8kzJe⟯fZ}4o,U~~l BNڵ$ "˼d8c9i!Jhf@=^JRDL \k@W恡w$ak[O&ho6G0+}{f>4уYtedݴ"g<*: ΋y؍Ed׬}jN L94UQ[[Rqk4UKJM nKiSyQ1!(d&Yd!ykDRFKǭ~1m{An*Tc|x@ q|!Px$8jc.L8b1mE&"m| ꦚf!j/Ik/upu866G(@TFwcZ3bI<[ m['4SY}6bBS߳p3 ŵgU!a+Қv_\r !&|ya!s-ϽZ_Q0n#[wH6StuƉl>W hTE'j3!>BD`ڠ>d1.5XȸGj-RBPq/"L0Lި,hm,;{"`=6L*#\CU GlTOY{X`Y.tB.*5{:&q1pEQiܮʔjSV!@*,55ֳ>Rv튀>D}JU#|V#쓺zWw8iրoHT@e~ )8%MQL {c'@$՝GD6wn_\(DS_'JľBnB a_rBTKž,Y}1qݲQr4!R6//,RF}iu3B^WGLtl hhQ u#F8W GR7ڴG14y \fEGxu޹K=Ň`.:ȱB/n?Vz@Ov]XZf>;;ao8DNĠ/?VI}u@h^뾗@H0!@G |Y ȥv{/A%hSF#ǬS.wM_ % TʔX Go+%dspKι0R|aoW36DMfo㗏g jΧͬ/|,d]qS<}DZOا jry$[Y=uH{!82I`.Hi!z>,8E9 }@ b0A H13_dq6ߝpZ_tzKL[eo}?&>H6êlT̓켝LhRhߌxI% e[sc@YnX9ǚQB!]i@#L^9m 2Uk< ,YTT: x6_2v'o7!P8ǃx\<_$oL3 Z8O Yƌ: Qa(隰@Ȼ;7W0JtWKޭw?8zݏ}?uLr,X3MX砷%Xp[pRx 6T3] FyHFw[0tqt} h8FC12.FCtm%rcDA(ErASN čě[KY$AB$uՁ*uwk!MمDPNq#R`j9~> Vx_b3"P k͝>k8Is37Ed/+L1NZ#9W${v5-keaKFY3-`&Q8Z7,- 0|13 ̢>\у+d2#V2.Y@@=!=;TB3ִ[hR)It&S ΞIpѬnzo/9X(P_˱/W_,k.0ơôJ:Rm3([YsP)<`)2ƱɌ8c brQ% vbJxVP`/TwZEsl$p^lq8@H7|=" VCPa =6+]u噈?f W_ݧEόL'`X>|f1QpXel=Z'xvj wG7j4-fkn>*W- 
͕,p1Q'ꩆﵓu@GLEy$)d5/2vxml#5j1[%EtXSưs}PvfւHe.i=" ӊi3ɜb}DpR)NMHDHsY"Ն10qG/J|#@E uAřV:s@Hh=ArFl+$ L--( @CDh88Fߕ\ߘnAPrua BY9 H,C`hiyp8MLk;"RuWM%/C(:myn}PPQ?"{MBjc{\ژtGLDZ%)X]óf{WT%U 1CRTBMVgᐱH c3$: ![LAP+v}Y]WHv"F^i#`YѮ^qSTw^zmmH?9 &[{HCRb7IE}i%gCq$B"*I93qR7wF5Dj>#p+0 '^qIaAz$AYT(*y} t8jZ>'&"8D ) T㖃AD+ܒg f#7R=utpHTO2ˡBQ+aaϣ+ϫ/ +֛X޵5md鿂Nf U!c{n\32)U_m%RERqTI@ 1 6kߎ.1?==qU\h&מM# ꁗ =JX9%T\yr`8& vq fb\lRvy]/GB\G1>lD4V D GT@ilRQ&,GړG"nw*|&ZVg1fJ/0>rϑDA*%.aǨV" W͛A;cakG1IDA,ĤEĢϺ̫8N7,ߣͪʖ9+]'E%XA$v\;gApnǪ?aa~C Q\}Kt;¯a<_ ^GW(+u _},;c(n (ڣas-ΉV&rb23eo^I8J,ƈhcp,4 `zIUN p?ϭ3I?w#\Z 6 8=+)͙ `829-gBf3 Nb5, M2rEV BR`x8x NEp^q υkǙ//jUHD$\,9>aEIW:bVJZ3(Ha, VcF E 7V;h+T)*-2bg0PmkݐfȦeF9Ow)#g', r毹M}~쳴,>K-h^0I `S܆Ҁ-7PT0e DplyNdDms+ھ$ݟ_2X6 +H|yl9^u4|od22(aMqI7XZƂ{+939]>jjKMqGQ+"&(Rhʦ6uRZzHB  #&R"c)G 'D10'Hj |sX9$m& *u|9 qVxFXT {3,*nb Qw\aHR ֱh N~9 1P!*!AfSrtI-hT1T$vvb\:*ب[B8bwt0lF3 Rnt[E"bH@sX1)~0˜E+@~d yfnEP-"C_2l5鶐D>D)  0DdXH R=z! lxIPqa[1" b Z>8 nVxuSQDsUy^4B(\ $:d&׀mPD}t+cX+^@"пwAh"'=Q1GItc'R G1"m4 vX,Kwtbr QH[ sq}x1NHf`b|n՘hvLPBSnoʱOkNE-194#N=N@ Ṽ,pea I0kS Wg]ͧ}Y805bctKA#Lj KZ}y~ M>. H5 o~jt6_ 97t1J)ֿ.N& s3c4>ۜjA\mӁ|NQ9[axr$ӖAeξsR2>Ds[ԓ#CJ 6n؏ռo!Bbhj.Qk0ÌlVscCܿdҝmu\Ö%pJwl${~?[>0 mć#iddϣ ;hxfZ cm5 vXQZ7 ־ekb3x(I@/5;ϓ,OSg7)/){#NP2=7u鯁0#wLIwt8Wg/NgGiDb|?]Cr2]ٶMc5Pd(M1CdJWJSY0Ni|x1Ő8 W"O@87 Fl7]$.Iݕ& 37Ŵ!Ju)b, `wք & #t@9{!Mwc?]efj.o>ӑ:.fȌ Z)&)QQe}e p_`:5~q7j ܏/Tr$2c$T:xֽ. V@pKlāf󪘯>M-ly# Q?,>G;OS`;kE =G\ȝqy'G&(~+XP3R#>2QKK}*OX3N`"l ?2ҖZgq U3s9xr2+~TXp;eU>9Fu񦇯߼}KlRE0:~ Y1BQ/.A_X/ٗeKZʭt$PvAWֺ@قatٚYe~-ldlmy]R݅㝛}.Vy}3TGssM(e4G?iv:#ltc] JϺHЎd+@tfqtyʢ3yeJ*Ϡpi)PGH?*epV-b U*-ǘ\t0_~יC;Y=[̥X'Y,j EYW9(6B[دbwDQ);?_f+&?j}K˷iץKo.LZ}T:ci_@4hΏ@%atAtph+/;6NA*a숨;5i1)!:CGbpqIvx"Uuts%w Z իg}um5e:VJsh:6"]E|LE*;A ,پq zHižP>NW%iP9OAЅ[4/g]b{Wt4>jz~~O;<֣­Tm: a CcӠ76ᢇܨ:ۃPAs׷K ]qmssdx/AO蜥_i SEn}5|ag'%j4ͫRг)sݟbb2x?gˆus'l s7<5g. 
$1i.¬A Om?m]kuNFkp!Mwb΍4aw.Z RHsǦKP)]Ç $r4% UY}Ԅ0zQAlygZMl]Ws `%A"bqilևw2sąc I !O3&LLF-aBEgzi;dV:%Q-BQN/1Q׊wI5W/\3U+0{83ZW0q=Dq[!<~K@> 8i۞dw}\*{Š۵3tRHſpE@[ӊTRl"!K 8 DtsF"ȭUB H {98|S+d%=gFȞT|2?~ab}:c,_NWU FӞVMLAWnȱՃ;;xCpJ^るSV5zN`7|7u@Tا+sS:y٢fj(މd
|:ea߼s~4(o#eAX|<(EPpn]7Nyeʝ)ߏQpy]]"Xat?}*5tQQ ' ̸gb泞6o]ڒ$Ͳ 8mOQJfsfZ0 CT)`Қ [C=?1cԘHS=Ԛ 1p)j0J֝z9" ߙbZӣUet( $~w/_ ~|x 1s5+x7ů~ 4׷cf(7wQ~=͗|xs`,9Fon/,LK)эr}w=z9&@1ŜIXo ׏7#Ϸ_l LP9'j#]Ǎz( 2sV, '`rM}ryN>=} ԯSO)n ֹ;h+`;fRAG7{J`od+AA-P dr&Y-ſXH <@R$׶P_?x:53Q yCGE)<~n ~sA\2o"9s$bqBA @@0_@u 5wt$MIo7.5:H魃=}$ FHhE֞xp5V92Z\K6OO 僽[]yX麟lNn۲AU$w6=t'012+)xR9~YݕABDQ`Dt7AEZZ?{r$;a~p}0=}9Q˩x_D`LDGAY =ee%Q?0 '0 e%uDx LY浴TxohI;=|;1 5nB=IT1(HʈR:ӃR+ͰU8[;,WZJѩ8 )d19˙EFs'"WV0A0"LmhJ7sQMeHˉH+W4WwU9ɥprfaXQ;`$ҁ*p&toFjcl4L$ ĩ< C#%H'ȽsAlr9bm(vsX_./([%QMR KW{=k8*WqDkŠ;\ve :2[9 ok-nQ3oM寗%eI+iO=TԼ"I1TyVJUJ'ܺ6a9OwxIPiTIjD߬N{ ,~9_E$Zޜԁip NFkC1)GX+!K%b~(ĥLdLm฻/?̞@FR# m/ygxB̂BEDt{$1Ls]jL`an!aS r7 2&g|w&q㚚8ޮiy!&TɍDF M4-ub+ɶ"1:38M ) W1c VzQ@I$BKyQ(=& dOʕD*LJ:<|I[I`xyŬ`8?>hM< qLc.nkUqV孿]Lxg}!w6xw ^roO§ a_]Hrr Q?)-7- Ob~q nׂP:I\w/wR+;%6Ʋm~ބd}CeR솑_$oZa+Φ\QOjvŦ"=e4,c %3)ٞ>VGF,m $<=kP%d=?[%g?6dfQm1GݍJf,y}([4*TG2韙yQYJM/^F5jzYVevz}gr, ;L`r).fH`LF-=lZ[>vvQDMzt-(E/uz~5v;ЎD4l.Λ f&١św⌬Af?/8M>.ƏwW^̯SmG :f!4 yy*& mZ%ǚ#^Q r^,ύ@F)ĎJ,1v[+@ 0Epԝ w]b 3pxAiËbqAX}#LKiCE < D7 Ep\˜^\" fW:XG6AU$8Od1F0/ ߣX'Υ+ zG\_l>Z l?{_?y`"|rO "܅=hr8*t_M|=B$aen̿ؖ#\E^u ;a$0A1~IJk">% 00l>r[cK]7w5{Ԧll5BS[o L΍܆Njm2A\7p]eP)V}(Vyu?g !:ca)dsTfr RX2餰d i̝u)=̞>Η"JJFu?ϲYq ϘrgLSt%&}H Ii3U$Ss6Q2&E[)%Ղ f}_N_Dz#I:tc[Cy'u s:d Cg5ψftd{ؤhvVJwIl54%Zͮ]{yB6f5X%G(Bn:`^F\0ثR(NY!?޸x_食/7io;/ xy>LOoQy:]ߌKDn8%n6KI<&Rb4r0nCh#t;dq_h{Y^()uC[ZWD.ū\")=FMVKJiØ\Pov}?>AkAJp< P%nIwHDq+! 
]pjG!qECé1my c Z|o[wQKdq"/@qcfs* v /q5oDh0A %Z 21ʘ)apv:POohOJI֧%&?ϖCb 4 jB"B$igb* [QWV5u28Oi(#MW^mVl<1LfIbqE#!r\mXO3TI D/|[IP.c_5)+pNjbVHVuOE,ӳµ!aypN2&@<1cÆtyڧ e ]G͌@lV#8SI"hwϑstWbzgǢ{ةh6?{jqB)tNT(Ҟ>Ο(!QϙCϛtV$fZ{6l*X[l qPDǬeBO>cGJk<  .1aQA2,DG'ײe0 Z!Q45 RB~<8!hNd-"ZIAR^T$ԒC.Gawmu9{ V2s$6ab@|"Y$%+Ƒ\_J>RPqñД[-KN9VրOT zZF%CM㒓~Ns\NWX+6EojŮV˸*]U2JU\+`o޵q#""/!5Kvv4;"[Q6b-ٞHsCg$9֜y'PJA"HN̘(lGX |u9oeSt;g6 Ί[uZHɿ 5 O0s)*ixNV l"*D$ÁE 2%փџJldá܃]zn0ũPfGZ9,$Kn)1s0hug8-N%KsmPNǘN1!`[F !In{%ײ޳"m74vO0&T`޳1c+7aȬ ,l-Nض*rmH9-T.q4S"7^ _p+{:Q E i;&1oS .g"ᣚMxX]@%ɹ LKa8`XH\XDcoHߵ; ߛY5߸W3r-}A%gB~1.8 J*HbiG i(j2Ha9L$a c2Z)Bx*] 4#g-2IoYA$Z\ص8lmIԁ~qCҞ?}wV+&٪j]˧uҹOj]?4w B6=H姯ׯ@YFBkXԦҖ00`{ QgYo~6TiJ7T!/H]FY؃-94D&./G+̄\HN8TV.ZQO{1 `ZI\oMzo/WҎ֛AnyUo v#3oFE0S{y="}lqOe;8eY ?M}2x CH##r[!0Wo0׷oGow Y.EIyԢfߌGYoxJcg5\=lQ,e mOn<*2*h_N[a~ 7ud#L4: _~)bpLNLCk;gs QRb!Mu.U^-WgV~'roޚz[u4}|~̾d^8FĊc7MJ<@DQG-j! 6cD)2(8aiC4p'SDKǽs_p}!98]8 Bcquޝ²O=؊I\Ik} '}V;Ꮋms\HbHd KG#rI^mp۞?=8ۻIv+F5IWguc&-.zW|ACP18Gq| -e i00ܙBȳ+\*:-qᙸ1Rn潜{*4eFJyӥ cT,G|~ "&xJk2c*ÔE# U -Y/u#&vAT77CZ).%FOW?3NoVB;ؼ"AR>댗pL/~۟~m'n4-4R:(AfIèFfE-=]YDERÀ!lRP1Ifq ݿ~j2C9̒z'ln^5)E͝Zj>RIOC nWB:.9w]xcu{nm:ެ̇hfMu\|W}t[1E;R5ȟ޾ݍk?$~J{|xe_y)+H*eR2X5cu!쓜e@!{~B;x~z8.Z,rJ؟=yJ}_ 25ɗuv8ŗExqeQR4x=짻/tXT콬C21a]3_O_B̑1C#f)gQ٫$&3Gע`$%ATFG9D0Ɉ1fXcuЁ0di1rۘZXYvOn&c}^RAXΨ +t"8#T8 cN-tUQeqi~jr<(ȷYWĪX!zRΘU3B>%9JQ}%I_}\غo{Q}PD\H*q%O2. #y?Oa /Oaʦ7y2}.Шu4ا+`hs ~~iB^ y&esB.ও*R " Jp DHNz- fZ?x~I@\ޤ"uÕQiCEPZ14](8{K!N|q7DŽ6I-sC]) `eHaQ}@E:$AK4weMnH02!+q(_VjO83RyFG8fCmM%XU$UI2H$2 Ah יʽRkL[ A hd>Ж::[]'J0./qXItg'ۤӫấʇ* IӼ\0˨c cA*øAJ8caJlYo$hQKfd\x"JUEԘeȘ%!f F\8X" cpjR6lFsC+BR0*Υ/c5iɹɄh'ZBN 0-9Ha,[Ruby 3!- 1K7KOY|lA54A7|t˅g)a3 ==J1뮤?JMFls~-8JP(Y@EM(93qjƃlDW.~Uj_Mbom4G?:jù 3\o{y"w#"z3*S7ui3k.DKlx`>-ɿ+~% 95=GZHB AqW%bȆJU#bTq.]NŨ:Gh|$?X(¼Dw6=G1͂\pz+NeY&jm|a_JQ6;/I'& `R6VdS.}&rg3磻,e&'8MݯABOLfT!kWnbZm|?A3`HZeDc4c5+W'fZxI"4m 58a醟z0O6!{|))&"}QTN-Br+! 
| ȉGm'L-#Dq-i<ߕln)ؐ2mKN<_rcEl#IkÊGSJj L|oG"]ؔ-ɻL&{L\ApoF񤝅W?:-p!?Sfil+Obk>R-{ͧ|Sn|EnJmn5YrnOd|+T娦Pe-u۟;9 8⎉15oAo|,.m YݠV68'wXZDR\NA%6]d`lle ?x6kQ]Z=8dGSm:܁ RFQO/#1TyS-֗KLe}7smWޢ*O&WɣBN NW>̾8GjQ$q\u훹 u1`/ # y(ކF )eaLd _B[oG\Uj{6bU5 \Ӕo4cLhƲY9h2`۶s{إUMy^ڔ*(a6 c0]NA"'ʖl.Y{3]n?貵.V1f%e"On>5.t>s5$/,>"O_og5A5B9@kqeJjD~p7Ą嬧+\F>e7Q7B"+h,Қ=%b6vJsӪS)kHOe>E%6$AKUo6%cRPJm-n|l'BݹZ&Zvy۠r{f@b={R[.#݌ ҟݥ(TOw<(x`oi0n#Zt!ΣN܃cDl1>7b7l@1:HqWԍ;S0XMK: .b:yI'[!^Q0e_q9ﳍPV) 8㧀2.kW<(g, RMjn.*G4f]q߲{YO XYN#Dw>MP|c.M4n7ѸDvS6nd00gDy $#ԄjnIi\;z@-Zn3}Ыޢ bYr\qY ۭ~Gy-ZMnӯX"A@QP !Z95!]:[}JiJF2en-_f/ApWb+$6xg,Yt_7~߿z0} w6e>o_][?[륙a.\KqQć ͦEcR\0+`h>3AXΝa% Lk@0)-CkfIR0c'< ׹@^2z43gJ>`Pag>I'HdOEr77)&ͨq!iX(bx(JB:e$`FD΁An3œevg] Rbя\!\ p}F{?G7WiP_=0~hHvB52Kɠ}@5cAYs)$7/M]GPu om%m`&tY 4$C̼YaufQC&6-wof"ѽ{k̼,>[f'JH{/hhz]Y/}`Z o8HUOu{twOA#US8ξYyX =~>~`K/ џ'YMs 155!0RKvP|x9'OzpUvP3&ǘx8~##Y/8R~`7 9ɾx!mlmdIKIN }IJ Is!e+g[WW㕏dU.Pd&O(2 =W*j=rl fRX[?c# |P )D&[)B Ôtk(BY>o&mIROnqL^>9pd![hs6t* mj%DVpKEѯkX>8Ub@A#mij*$yjm㙊MDԚ (<ڑ`g%6TνFvAZK1!m$".)@Q˧  U s*`0p@FuϝJ(leiBD^a>tbBCִy^uG73L],̋I6ͧ]hj Y0Ex|ŧd=E(&FC,R:=zNl-5t䅀eYOz7(ɖ ua#{ls s]E%% f._[̹#_*&cJ1_.G0p (A1r#b6HL9# @dʁS+-DB6oT\UtĬ10#*`)J8?:+58df ZZ3YUuLxӣՆ{pQ_VČI$h: :8/&ǎ6}7g}H'w>#hXE '={Ng{fǝt~?'<=vgH=7P|1oPHn |S:7h7]J-QRhM3{x0+1p!7ѱ?ž|T!u-e .o/2TE_7w_==-ߣ4n]e~rĴ貳U~q>,)kG.o`fhco]7/!CW&Z1q`Ja6ѹ\j^vUKA `g@갴Q)Aq 0Kza5+M2Urdj-0CqZ eSV+U}EwO*l=hO,vjO¾/$W*c[uۨ˪7u}?l\ BlG;GyմtХKkPt2n ^p(%ZnXAm*%O\aFW\q8ּǩVZ!mUqTɺșm7G+h|kv̝{|tXPr[̽Y?%7USs(œ'GC~5p<;ېc>9a:?{:٬Qx-;+mg ޾x>,RNģGRyUވXg Ŗfw`$l`INĒzv} XG}sE"lʊ)sgnp#.ڙd`= ,.W6vyO!URȧWuʧɦſ8d,Ip0$MXK7%Zlns*a$%"/ -׽Liij,H0w'A,_ K&ป̈́Ĥ(HMX.99?h$de~;g#%̛o78e(ꠔ`pX?K6X k.|#ImEyв79ͅ}yʄLuqofD?K[1oe$Iѓ,'u]> N/&RTwc'' МEÅVʱ|52R g:g9^?)4iyR/zy< q,S3y;!r ׶Fa~* ǵ*yLm}nz[JYbI\V6"8*bt "RH.FetZT6FgUT$&-$\bf-4h\YsBѢ!dR\+3SmdV VZX+s.75IILqamf% ٜJ ɣn7b)sl7=S.4;RC |V]\VGp<"n~CÑC DDj*4~Ƅs.DEP.R"Y+3QH-t$QԎA*Ȋצgߊ j;d蘪+iԲ|*cu-5i s'pܥ'I=K巿y ,U`& $X$!L"s>6Z_LjF̖Պ8όo0^tR=#Zg^58dThǹ%{HI" >2nr@@_(4jIKzŷGge4 G Vض&IAk7Y'܉Q N2"]oٹԾT9ڇ=z7&AG_~UztS:R>v;YsQ:wZMv 
*ژh'B`AɊW~4{y Zd; $!fgnMe\U4ZIJE06+Zet~X7OhA4n-=Q07JyOZ3eӤM<̚'}4zѶBW[%ohQDJIexUWl1{Q[Kyz{ρ-Vj Ydz ^;Gz*A{S3/^3{FCf8l˩ s{th.k%iFʑ:>ʋ5.%Y }B ҁIAL>kzcȭT\VLë;:Zd߾;jN;줮ϣTBPVX'.Ȃ2xT҂8%;w}Q]_ސ›騧 MԼ'v.2 ::&K_O 8pUDv_H &`?n~.bv柝0ܖ>p lK׽xF^7ڍFQOy1qȪ0 Vf˥Ze7 k;EL}VgUPLh2$@,xR~FWU6TUS͙p;cOix\6nSm*nݠt1Ur*0mceS!GX4!MI}P"Uiy$T[2)[{)7$&#鑊hA+l(G15-m|CAjIq21E_~O}4i-xݙv&ݙGwv2x+/#h]\h.eFue/wXAKZ{ܹgpA; ŰbJ(w}NZw#uZ!e-0;EfJW+y\l Cm{wxw?|V;mrWM{\.yjzݪK;"oY4;o6CGռ4E.}bvմi!?.7*,. Jەk3o_l|z9XuB|/#r>dzm8VEߧ[K>? >w@ #ٿ"eg0KE+}ٙbvwؖmI}sqe+n["۝|راxX,r? *64C4~oû gnCA3t.j&rͻ n1C4Hsӻ醮?wLctn"i\Cڻ ?һc)dNٖs|3.X9SOU=.'˓{GK.cRhx?eG`$pgK2!YrOs"J+KWm}sFbk34> R4RVqNFJ ?04jS @ѕS :̷cVky2q{a0cnKS-sb;DWN*n2j oL7Blӭ; :$m' !h"9یD%1ur1~.'o-R+==䳙>SJĉ6,p(02PY)٫U%<#*FT ]P^c-b;~m3sʕÑ[^]Z:-P1L[YBUV$UHy pxj/fb"A(1,,b6Zԙ{돍fЃeaQ3rYGt۬ŶBoR6A"SbkHHԤ39fg"SWOt{)V(b'4E^hp^cMp  8ΩL:n'ܤՔib%VܱEfϾ/b B|^Fbhd x4E!n\fN;&a%L (˲Ȧy8mΩ=˨#1Wb ΜG^ ~<.~3n*DzAb>z.6w>#hN\ii Hy@ARKP ~QFZ` ;˜r}UZ^a޳bAg5`+g`)8j nPM 0 yv.~_AR#HxJ#5x\`]+k_5GD+2F=N\IU~0{`O]w~ ` j겎D(^~5 ]*}/UMZg+?M)C!Q%0ⓗY5WxZ>~g/+f5իiDW/DT_ַOkQϧڦK&2*&PQ_x5ur_W(1ɟowx6ќ/|4|>3λo߼BGd7Y_Ö%+|NnxGgk?8Ԣy7|rv C{\ytHMשY->@Y`gaEެ}fno~>J$N&AHZukx !IrH^(as1ȧ>dU{ϰ=*0iVnF5J>Sw?ƚ>ߠ#8$C]3rZD1,0_ s*+*ƕmc:* v2q@DMNv 0yUɸ$_g6zgg;,ژAGpAnMݡ @N4 !_v{j]OݮJ!!~lleg[/tYi0q6\DޤjG͜\3G>)â;w*|q[_2h$e4S)2X c)GhZ<f;#F21JIPd2 L;OrW@f"RgU0*_I_(ɍŐ(_|W`pO^'?ZW-k$Qu0\)eT %]uhCi2J`P(˹rIpY?§4|ٯH42].  "Ϊ( T|K攐Bl6 cMpu`#`8oPi!I|T!`cEI"&hwgwzwZdEn\A<_+ԡ@4qV&+:kt뽹Y|kѳ= R|Fh6A֘)f%erI4*%bc40$~ovBZ'' Pŧ,Y+sƅ~3kz?c4m3.;͕ˁRGLMj8&t:FD?e>a$덟hWwZ ,D&D! 
,``ּLPAc@7'w51F'?i"uaa(%fڜD\RwWn&D?D=N8њt#M͋3WyB:v}z>hBD):$GTc5ZO9(%R+==ThUq@HcLFi9r} 1(xPKR0]M&@xPP7~ߏ{¡Vk;(NAjd^qN^p7#Bݷ=84Uqy>1]<錬2>`}Qh}fSRfMEamey_ v kk3#K8Ej\[tn6TAQe0B=o]u颥Y#{ }8h,iQ򩯥3Y\&׉1MYBRc;щäjA\mڵ\^] g  2]E+`/| 8!úΟ7_Ezrsı5O=2 (R%*UŠ*qr YQik=;4hv NPoM_~郅,ɂE bEyC=9z-] TV1FuUUPK⊂E*/AOYBPJWJ#S^Gf Hn<–QUH`,P*RmޯW‡DIeե& і X`xKEk[_{9HJY}G$` -P!<,sOh%¡ QZAgڟ8:_M؆׷5OĄ~=+ V]=@ݝtd2 qCNdiNl OZ5vAg8(z]0e: NY VkFD䮧!B•imh)f1&ײ ЫU0ڌ\\%tUõwgcsuZf治 Ahckkl(hu#Ca1!spA]> p+4+99V"&E(JElCK벍/!B:mBl Znj=H%|Vzs ?2.ΧS&B 0:$g*؜HJo Fs $J:EnW-)ea5UV6;4S ݣ{e,i0_2ʂ;-AŒof$0oh d0&%ԍӑZ n)%VLWփ!Z'2"Ҩ.94r QXV'*rJ3IɃSfqczNխY!YYf^p*qf##Vg`1')G0DʰIcHy1"@ Nb ɠ %TBT#bO͌S(cLz-3t48qVHGOG^` Aa1X#ɍF A KW.-gơY EM{S1 ެ %33 Y'IɀۿM'3=7_$Ջ5W>a <\&cFհ-?cCh)MQ-{ Ј+0}B0KL z?|=4lև3xqo:^<8yMN:q5}8~ښ"[{Y{;{oݻVfp5?ww}~]`ou{w=ѯ 'Fտ:ClɏhC`Yo홳Cl)Vad-\?&b;skbR81隌ћ߀c(*1x~}ћ;'d9lg~ ?m}h{Ýv^n} ɧ3ݏ7߽ ϮyquE&Ή|: 'e'{/ ts9G]Sg*7j%!1RJbCSmIRI]tA+n^a+\}]{^q!ުsWI_Z/gN\6vtţm0n@ޠsO8~=Qmo8M">~C. ҩ30auT{LwF?A4rG:=kX|' ONsxt{]xeݞǟ@si<~Mn#$ǽMH=[R0ɟɰ x*6ҝrBY#V[(S?j4Q ad!,nIX_X=u s!p2H!Z1STdd[`B0h!w(^k QLJp7EL_R0K}`EDlܣ(m7ZӒ<\~A _]E./&|tOb/}| \ayU2SA{$2KYʨRFeV̨,d,X~/A0 lN,Ex#HIIh2`Ϙ$S^ˣ2[ EEf(f %w<U*<€ȱY&BCaVIm,>]kB O)W |]`nI' o{8LH 0œͤXH@uas#ˎgɐ, R0ȇZ0ab7%0/NW2x@H/HGߑB]NO  ܍07׍~V*rxswaۥFdT\ZWrkmc1t__r5O9|z m;r@{=~Ҧ&tA [0 ϮlT<1 kr>7hy`\~fmO|P5Jm:o2G fE00- L\,Yr)RdEA9B1V,p p~Ē Ǖsfĭ,%neĭRK:-lL1"k&%N3''Uc8[ 8T\y GC`$ x23*h/ Sy k׌y]kx;IbRSĔJY2g ׷j]D⩔D{ne gTLwRFX]bdxJB(Vl0]i ;Xˈ*+Ee& $@#v+ &b&xrfh)r&/Ѥ5eJ劈:fUǹKY \X}q :\8 ? T_̩1yUƚASfy|~j`2MB'J#*zF2L+"HH=ad0oqWs3kAPIqai35RL;hPNR;aT! 
\ea!+PpJYtE}M0WfJ ŀi :WZ2b ۳֨p5f7d(PNMuE?xҦUEBhPEBަ2qoeTEɈ @rN9a(H2B-'K">j HT0~9t˨ Z?FbE]y{#(+PJkq5J X#0* H`Tb0PZ|Rp@{"R_#}P`6"D$%t%*YE Pm؞`Sڛ+!osa0($+`7tPMl0[-W-"v+R`ۘdM.!F*6"L ̹3i &N)鞒&"Bu!N2 FE>> 5V׮'l߫asʭuVLd9[T3Cƨ2\eJ/,!` 1GTFދZQHY͕ȊONx5oQ$紉œXpƭ_p3#k"=MLQQ뵁z֏`(tٚ)zMH4 J!P592UdDEo8.AFNMph)?&p.Ըu[?Eָu[׸u[׸pv.#UG B[\+ˌJns֤B{@'f1I/1#0꣗Ikic w SJ3e8،u1j9y@ PP =Y֊Z(Dd*|, hȈH+eR,ON?K(]*# o|2^\\%"ߺ{6]Z>}ku^>8x!JV459/`j4;HD hᣎi>M`B&oX`31\c&dBBF2 .›]]VC5tQC5tQC.[`pp$BggOED"hE"U vY4,gE K@6%pۿnU%EcY%x?ݿ%<^#0fٻ8ndW}w*iE f.bYdG~RKQl Y<=fa9EɕlPv P؉M( TE%bA 8]/oqE.vw'n/M!yzE'HD7_0V|+-RnǰG~u;4샰ŠQW0u;Uٹ0-ۯ[ymAs"Wks9AUHTU4NkSgkGFlg__vf}(}9ś•f#jCDQd'lE[G&oĥdM ~b=f9 E<$s4,gQ=YTϢzN\$'C zó a4-(!/F!CR}ɝ1F[’Q!T tC_4,}B_~s[ס [VƉ/8qQ:&mh6T@\3L 4n9 mS \>%;|yL(Ńz/@(>_~lyFkj)L)I|loP8rٵr t B^jYn BCB.Z]G5X6]6!b_(:)v2`caLY"o/"oyE.%~2>P-͠P! =Q J{VZ.`1{Tc#|ϊgoOw7+xLVmw׺-]V _ГUI@/N:1XMSI3Sk@FVj]!Vl\lv2!\8'>lY"oŗ"oyE.2D>{VdB'ԕPjm@H=1{aWox }/N}w6ݿw+&k Ҹ'/\oՓ(\Yj*G0.gs\\2N|YY,hE>$m{zmhE.vѶwm?a'8v7M%G[VBQaQդ#`:v o$Y= n{txU; U٭7E;R@ҿ3?~/?}{xp I;үD?c_=pM7^#! M\o'}sq~Ðq>Dk ]*&0* os}sw|RH~/bn8(bx(ƅC})Kq{6ozWgBR_U͵^_gHI49?A$1'M`dP^KqS%tx~\cNliѴi[pE@_#A#m=yt^,:h,ߥw]?с Z"g_-d'Zo>`^M;*;StahOãf, $j$PF#X@611ƚ RpUbs?7I.[bs Ѥ&y^,ܛ-$ RP[*Y7Oؘ$#G'f A4P"-y„7?l~>><}_8*Kû~Es GϢmoC sb etYZ*Vɿ $1d &WZEZ&B QamHb LN:ZMqi\9șh8pޘ3} d@u2:kMHЮsud0׷QQFgհqIHdTщqmR!Ub)ʡnc\]!(kB+gq,8ژTB!4Z伓q;|7]reC^oLN%y!; ǗSK7ZX@a@$ٚc@#ڭ"AM>: 4cA0!Q&kY+tKnr' 6b青0~HOGNn8!)_I"+aT4ɫjYbQ:m֑nX < GAؒ\ơsĥXvA59ӻ̒ӣ(z cC=} ʶ6ׅ;JfsKOf"%Kv.S"벴Z) #,~-<]`-=IޡWvhNʭ1SXZ >uޔ@㱖,KBuʢVu0& K1&sl,kIt.ԬʚE3IB,w A3;t8͒c&ݝ}Po=38`N[[yIQVRPb9DՃ.¨i"k C1ZWs2d_ua@n+#)l&{zsl'ϟ`>ɿ=Y-+Yrd 3ipOylqs,C.8?Z_"C$u5?O=BaF?rQ&O]l 1)yvR_ znsKIH7oMbsR1{-"݁J%J:`{p$262}[q˜% E֦%dAOO!K ZDc^a,hw"',M( E$Z󆃌kt(pրt= HT(,t+1B{ .Ɠ%XYY[2qx(nG6*Jw5 dki'fB AԫSIBx2J}SXTDD%L *6,ZF?( ? 7556Um\[r -!H2 ;e=_-p64HV/g iƋ6s4y 76 kJզlcB%ĮC! @;~;67`$ 9tsӘ+_/*ˉEX];`徴QpffՆnDC&^DFk(:DlKд7ဨCSUC8lfYq$L(߬,܄K>d#FfH!0 QeV\ hٚ,"y!\ I _.{iå_% Qk PkQ$bQk"}c5h~֓bѸ`ށFrdۡ l. ! 
J=΍15^ΐ5ZҘL䞽c)rf42ߛwQ}L+< ~9iy_eƽ>²wLҭ$IeTJC1TMLנR K|3Z=;&&禃DkIy8Uc#gdq =T>خ8rQ>PcLpU8Xrз)`ܡF`_Du~Ou۩s̭62@f Ԛ p+%1T*G/ ]lLϢ {6uIoiORɽ&Xv'_6-<UEŒMܨuMwhQۃli7JڈƢy7ד~ #F@p;N3̞; m({y{+ 7Rv,Eh W2cQa޼f ,qKXJ{'XeЋӣo!I— Wt"l6>{XL_*o%J-&o3# Yđ$Ӷԟhi.R#85.N9Zd_4Wؗ=lby'GǷ"dPQx /"ʈ>냽(@T#'P)b˝QT_݁qS嘂CM\oc{%2a]Γ- |Xf=98`(A'x+huJHn7%k}rbQQq! T#48'uĒ+ p@jm|jQ®UDצ4UQOpPgsdOk*}gV"XO;k 2A]oJ㍉/vS91s*pcgE-\B%*uRB<$"PnX8TGr]lj^̾E{-R\="/"hb͵QF#FЄ,z#u8qU#%NǺ }!a{ua^'nRjۯhE.kCA嶄Dw~FH SY6R%PZX8p&9ZAsfvCqFVOV.j-v*&B=-Ԝn+3Xr#,z]b_Tr]QU?RS'-  a^AMb(@ Doբ1-׫k(@zE f6XRa\t+Ď)Nkb~CX*p}6%Kl*2ZBxUm|@R*‘WdxEO69D5&}%Z.3˩!j\csإїδ9wcn /Wy`\L+K3ff+2Iyya3@/0 D@+>"JguQz_Z=$'TGZ`/T$_j:·C9KQd`[l;7; ^ek%C#5n.bdJQB RMΰ A8\H1s!  b #UZ*9 escawa>_໒SXRXp߮>#a#s֙L:iR>0.1Hw%\aϊS;n8pA[yё"Gˈ}Sll/)@$C,!ʍi-XKT1 LW5r1Q1~W఑'FrJ 5ͫ}BrQ BU;)"/bz] s0\X΢q&2=79˻<}Ǧ=o[P_8R9a, FD(2\`DÓSk5e O:B{]R= Oc*8ؤpe 6Ue˰ &g)Gby/ 9 ^!z=8r Lt-]v'6%kJ&! VIJցSWEBS,L_p|bƗy9s$ٝ щys4<Ĕg\kṉAsR80Xf>f׆ɖxk5T4xJWc^W Nħx(0=\FJY&9*@k1JzE(Zl^Ss> LOW'^hXŌ)\3F&#;Guv[|lg OtXԘ|6b[ ԪJM5vrnl uho[kT9RcW2Z}X Lo1G}%T0쭔YOp 13mxçCdjg@H24B,x v*?1?5r];l) 7fq}q1xrX\RdE@{X<9-O!W/,7sl 4MK]Fr{/f_1&eQ6֔k|ۻoNTaY1d,w& ֙Л)\4+q~sIYE`I Π}R7 ke}4h t}v0yؔϿ<Xl\??Xhtn $ hB>!33jczsƣ#ݙU:qO:>e @ /@") wp@" Fڡw]B_fOM(vDٓXrwc?[4"s?E? GƇ)8|9mvk3rPpX}oNs2$ MnLIi _Mk,! 
Zۃe}KyeNQz{7q6)QX e\Ǔ;dWn"3}uvOw?uOỤ,hQGy&(i(&<!i迗z6+?w͔qѾtXQ9Z3Ϸbg9Cg6|e4qߟrЬg[߀:MҘJҎ(ǾvLaMW\Ūp$ emc:{\(tSyws-VRլ?q\czR$m.N9Zė 8t6- bo߮;Nxr9^ݥy1g3ף ` GX2G+'7dT|%H '#m+&bߌ{J%KR/cڶe:e3ޚ٥D7Ѯ&)ElҭYj<{\s:8Ngs ) ]xn!dJ%$2ϲy ]M}`J?ďqb'f>a4 O ~6fVYi<*J|p 1rc|A\p) 3QJFTbFl~6/&W wFM?t6v 8vEF[-GNIhe`g1X$S󶟔6)v~jg4gݠV[wwɤ a.@'u Q+c0Js`lg @~fi 2&jXRnIe-^hp696;-޽}.C=2 7oUH*+%T?ÕR>+)hߛtÄ2#DZ=PW+rǂ3h"(ߞXEk) MY|wν5fP_B6/$l3q2$l5|aVDV0AʝkZ3z!|㪩W ]՛CX'oKb嗹|CNtnMףR+TjGk !$ZYlmv.dapI{L2Ѿ?FKQGDsaN2H t2:dh)Efs##ٞ3Í]x͢+8a/7׋nƒ8(AX u_>]tI4W܍}r2W,L[r;Yg ތG) ^"Ӹ!O z Udq[9(2 25cY+Xjmg392M(JN) p7߀PɗEq.wARr6!LSHu$k$>`h{$%B؏/.5rGmAe渵"1g4qJp/ue+]}+J$!pbB^ɬ&?$F,995zSxl2iX!1 C|Cb@J陜bOqzӶ+2#ixi׷>3aΩ.f@N'e{d_ϧŵ[gN@fo]/9g:0k1ƆoKMC6*_Y@hʇ/ࢮ] Նtle&i-*ⅺpd '_t!˩"_fN"#VDν4t .$Z`z$N$%VaC*2LߡE,H%,ֈ;]I+~$˟ @ӉI(-8@UdiWMtIyc}I|bcz^юX|pʈRFe'GA˜>+Iʓbc Pcr9ЖݳO r C[ؐ3]& xg,cI%֜aP .'EC_wT|xۀ} /Wyˁ׆o!Ќ?wI\ nkƲ^F"r'쌚c#Ku-B.YaFcP.]=5c]cw: ڄ띌3"LNF H3}iw;}Ğ?ְV*]p[9{޷ǖual%iz%[\W9F7EgeAvK*~I$I4|i^pQ̯m:h7%ȶas>ѶKP;h5S+>Uې1{cF4mc.䓻s6F#hl3%O?}?~x?哋t1xӗ5qM_D^I2dw;. |魛^LV{xZWR+T$%QPmp0+*0{Sֆ+t4w%}O= LPѕ-=^?CXԴm=IVuľx5ȴ,woɟ=v6NYҤXՀ7*2jW^$3"+BR+-Aޝ9*MŗdCz{kAdOǻJh|HUw*"bmɠ+hKƇ䙿y0pC^AUIW= W ;ysz~*6㠤ф)z"!ւ9_4z"}8jg,b`oyFf.y;%B! 
~~ Ӥwuύ_a%w@;3[u*UwTνĵ5_ʦD>g@I+P$!~iognpYЭ~vUֱ+z eok8%{fF^SYl7\c[p&揹 ]]^NW]U緳\{;yſꅜo#s'}4db&dvpF.(kAp/,_pf,Ȟ΍N Mz{ =&xr{@7ϻ0ҧO.Ml<#՚R Pvg6F JՊHThnK_HKk\ *+3qaU@Hii&T"Rps;5^337r>l%tgΞ)Z TSҟ k!3|]rl~\d\۞;~~ sW^?2U_kwfThj#Z5RFk&K*pa,|(v}(tYF7!$yl&Ȏ`x#Ԕ7f*CA+IdO"|oV0.rޥ G(9!J/e 'D }Ji&pʞ_ q8~pQ~&]Q%!c£νi A\#MbihdkV-NBFZ;maB'J*qJ@S%Ar0Νa Lgr&'D0C8h:ͣ|hz`h7MBWU}sT6Bj4^}u2FkB_Ou 'U> =Rt!:5^餩KUhHڿ1' `ƴ?xsLG ֺVc$h7eGECy$r*~1%vjL>B^5w *[KI8(,) G(,7M)K+ϼKmnM0zar:/g)Q"n5/bqr"9@+<"Ӭ1LۚG|ЎS7\Go]sYF0>DN*=g ȼ(A*<iXT"ZŃj4bw==qN&QuO*oҼ_+.4- )D>gP|8QŬY6䁶'b؟͊e^D UJօ_,Md$|ͰeUTU2uͮl<&| 54B \ĶiBe$Ŀ4➞DK)d(74ؠJ#HGKsՌn5<"(-Ń|z+ k D(/RD_揧|:jÂ☣k~L(t;u 23Uf(!D'}kq&iܐmhaaq,-/[W l>*༽@fn=ɭ0\GX-9 4ˬު@0HEFKa N(kD-hml},XLaQ H ei) tN* !_a8TUM#׭C;(FWc7_eWwzt"9F!:@ 4݃݌iI2/TRJcb+u94@|#ws^ vh"^r:rrvxn2ǣõfmd;-V$_5kW2lo=-*ag\ d 2qU 9DC(VEC"2pa1?z@:JN)o) T-yx*Ck aj;B; uM |\b"۸$Vs_5'ϡ˨(.}R!:CKQmIV3SƄT1&R\Gn$<~k*ρ2}I ng{t%rn!rjY(l *w+pb!)~z@%9t0x'55VTCGr ?l Xc0 ڥԦ06tdiD荨:Dr F-HyB]߬T2nl=|-BKb1wEJ *5Wc V}j`Ùcdj7%6t} Cvyą1"Dx!8\HD%w?KHy֊LJ:W%F 1c~`E.Ca z:^IPtl!IdGc5ë-C*nfn#ѳ[Utӎ5f^,gv'zem/\E`&;κvݾv ތ5 :Yo^y{CHcmN2D9(=Vc t0z @ ~p6`[AcՋZ\dN0f(Y$`=gmVe}q]KkX! 3Jt[8xpjA` %E1?2ϗ/ͭƂLfd, ƺ~yHG&kpR}xn']#piE$KY,}~"?L6ڽO oB'_>|~gnƃ.|˛μڧϛջm:o$YqUeRQ?M/ѻN)." i,.}re uY RDm%ά߳n"eCu cѽEWӏ^@I21j']8qhKx6^҇a+9< DqmG9BլQC-r&RNi햏us 7wjaP{lp5FIaTDbkƟƽ+Z#vOkMK\~nD>t: 7Y6`5nC6Z4`tLux׬[OgN':b:^Nd6∗ ..=͎5w-mEi^'IӠ9-0,)eEcq~3UtFMkcF;] u&Y#dl2Fg6\Un %%ƊuvcN.(UoEZ>ŵM72J"i`9m;у"( 8E-7N2LLf8)#e --jܵ L!\sRi݅DʭV7} {<ZNUm3#_I8@PG+\b@0` !4 yO Ǜ^M>>.Tb}m(FoCܿqy27grG!_.og˔`:~i-q=34CD>la!2VgQsunl5 IH69$kN$(^uҸ1PK>\M3uM6C^mǐ 6-,6 T߱:rBpP,{+u;+e3IL,Z'VT'V[b"_LDWF TZDxE"hAJw Rv8ocL}b3m}ĥњs)H=*udU3`kMܬSቤ'.8VgŬ'?cxϯ}cqc⽇zbCXk~`=Ԥׂזyhj &Rgٓ'Z _ľ,hz:1޿C"dg1zrG.BH]!)~d/SkVr2)wd/R;ٗIEX a(2qyT<@3CA(cnc0X#wO#B1q5<>~ !aM\q%꒣9 y{%P ɾM(Ynr/vFʌ܋[cv iNy#x҂5uy5B3%f]f~hGw: 1sgֆ: AY 1Hg:D!"A퐋0`0 $,02"1 (iw>y.k}x/CM906㝎8FD{)f[4ڸˇ }<|& F P,d M! 
[binary data omitted: gzip-compressed content of kubelet.log.gz]
S>,/1k145R\ HU&َa J7,}h4p`dhvn6mCN [~}o|Z==HuJȁNڄx*o8IԿoّX4FaJK8 6Юp$T7%v=pNM)0Ǭ,&Wfg;Ea[n۸۸?OfyVݥ&Ȉ/·&tDSV`1PqƎ*a!A!u!%̕!BkNK-.5CKa1Pq!_Qj0vahCJ(YtAnj$ϴЪG00yݠK}g>ACD@𘖽$-|}%7<шsކ5%S:a)Otȁ8O4\[ py]ҙz IDM.u=IsEĶb~߸oM 0p?ׁdnڏYj:$cw) "݇;AR D*v\T5n[&I/n.<^*FҮ]sGM/|;^|EvKr,/\W$%n~ж ?}0ˉZ,UeU} ЩzFbu05*Tt}(x,*&Uj~@]ܱEkJZS]$¡t;eP/I.[ѣϿڰ.l6fVJɁ^H.m7wue$HZɪm(w!@B, .)䘋tLŮmKu$qGBܥ,us.ݕZp+⇤_16 wTSYbۨ$wz^op2hVGԧ.,մ^)UF[4ӏ$#klYr~i*+D#ј1%FP X7[,!C'6p#Vsn n)F 1czZ7)[,!C'6pQD Ew4Ժ!/|ƌ)ug:uXQXH Nd2NкE n F ǔR*jwMQ*ehZdDKM%A9gZÔ*mm4mIG,h!-[{õ kF 8t8Z]K9Zĭ<$g7^nЎi1}9R~m*@!P*=P|] vFEKX[S s[ [Jd_c5R{t/ P j!4*]޺|luCYt[#9TĉQ=U Q=['% oL`r_L%s]n iCр6Ă{S_ s|ڱzt \o7g~dݩ+qrV}FikO{x_ޡfbA\+\^f~qoyV-?9{I_]MW_;A%(_ێ<$|}:sH e>G춫f7nS$RdڔjMgv:UZk@w-4Ё.K@1(s*@FwB $%&vRAkvZBO}ӹ'J?{OO,",9]Fq,K%cBI!/*Y^*!탅ȁ&v;-zvǬw \% CMɻ)Y}/LVfrt":G9w2y dzX<+-0 & SXoYal/sPbZ`!dnmsJigIBΣfa3+w;G!aViM(fQڸńQ&* yy6 ׏} X1YCÃ"\=VaPzp4O8 "KC(ҔIhDA%BJf_jX.@9B|Thi 4}L2E1HdteYqXn$2,%Y ZH;*(*7(!/XYRb%+,D'w>x Z"8*]uti$&(BRXrD**ҥ&R@w #Z@B~Lw*׹c8ucuqw{gŗ6}Xg* GÕ>oqYLؑw7P;h3|Z &?lRNzpiw+G)Eۓ7Xsg+X~;M|;gv#)d}saMGY\n֘p7wս*t "v9[M~W/؊ۃLRR}C'7*(8T'UCf͞ tH]2Q Ns RV ;Zͧ]¬PuwԨ+@EaQ S MQ')6o2auñ a-]`f7mu7۬%~P GAPTf9͢Pni(|%87`VBF.9;[1cGDd@2Ϭ2I ,A4MXqgU@-{BJcܯ菃QϠ!wjZ}GnG>NQ >::`l*bI~ΖB 5#nZb=9ubqPg;!^4Pd}D#;ϗ~ )dkp)o L` - CmL8\8sJY'\9Wl~rXjeIfcn]1v؅?o+MJI4$S)8D\wjwŀb$1]21EN P'?|X!zNF>:6z. ne]rl;^8=A7x'9ؘ$XĶi$̻ZHaw]VuCHf)hb 8MO\97,a ?l\Nu'V*;~)~svpZM95~Zdܯ2%Q'BL)O9\x;LIxыsdQ=l&{ x1w!ͩ鶁@kcV4GdA@71Z1;7kfM~p(M%F"6E.ov4>J\u~O)lV[nǥ k.˼.b̋5y!D .QE8ΐh_%'ŕ3. ,k;Q/GazNa`\|&uZ򹾌>|n$h;>PbK2NvP%.-}*I ,2WV) ((4VO}Pp }]Q5VzE֔,9a(? 3)T_#BfYzaw . 
$c;vfgT %JAPH^VœV\W67('Ac^Q1eI3if=-vVj^Ĵ_׷ wo(FC`#D V1vBOF?L>OkrLaM(!;?w h>{m6,]ٹU=^]E[:#_O "M詃Ї:B:mWDWs˒J*h[\-hZKoN\=fĢ!cC4j@P1m=HGtރ5ܵ]U}`;\ MȠoD+ɇƓmqc`kʎ1 {h7}ݠ;OVqb Vdn#6vݞ\ΕzM*(j0{SϸuoέAl?EoՇAv-vua) p-_lk- Bֲa}_y478bZ+(2ѩQW"qz{exkJjȮ,N G7_ zJKsi+rzoC)#k@;˵,sN+SyD@4W`a)zÆWLZayRpX+tТ|VnfVCdVxg#>p`CK `PRιrP, yLU_9 j/w|,H`4ؘl٫)4نvv2*A.J&47[5SI(N׺@P񂧝^Q#+@z}8*C`kR zG72eծQE %| C_:*tq>G / !?*UiGc :כo%޴#LsKd)Ir&#Hg*MCzwh*0r+[@ƣ|{;5X {{5U%bNwrљ00!LYSQkIVb';knO!7} TCB7͆s59Z;J=x'oRzqlמ֓Rj7y_PFw޶i )Q{)k@$7VJnneфYM.p?4ߋ<] KJR=\qrT ^2Rе&%c.aA6?+"&@BÐD/B7 mr Hܗ5!(b}3epPGm`c5Rhr`s3dkZ׏0Tlo}.bʊ9=̬|Q'I9F*N~8-NT?|+=^G֔K*gDx!.{eUj7^,Uh7B ΍ 1;"Z5~K[H?%ɽf_sŤsuG-b[A d1dX5QK_is_~ 0WhLĻ/񮚸M|xk{z+BhҨ8\Yl4$jq X4טQ2RfՌ117}2;<:NjCqIY:ÁΓs렇MŦ7}tƾ/+8p6Q ;Vi_|j?pVpBj;qR =JP͌-tp˹.>È~/=/QGɎG'rܡ{7GMi緎-6|\-G~*؏GY?FW h߉Yċ?A!֢=[zBVH';e%;{i'Mp#j 1 GZ7fƅ%~O%LGm<#z'N>>rq2 h en|._˗ˏ# =ugx0UJ>RV2k.5Zo,j2k~fx.P& Gv:ivOS;8MQQ͆kٜXhZO Aӝ.'Jz) ݔ=G`u/yԩ<<??9+'y]|I|^ !hhN:JqR|I!6'1C}Y\:>w Qyj-x>y4)*&喩sHT3>I@i,L䅗RАd"qp] Ö;f?jyF+PF `zi L_ђǫ;Z`tg[;/Oh14]Ht5&N od _hDnbXRƁuҢ38 εM))7 "L@EkHDwt Uz҉FXm=ww i}B(06eUڡC',ġWZBQ֋ fT4Phbʀt{2&9XjvtYEyqu֫ e]ΫtQ4(*aӊ9^eZ Z뷺n`$vtH0> G7-3SxGHm0#ve\K%]d'>PH1 Gˊ4-Ggt 0M4 Al@P֫g9Xy/~9épDx84flz|zn7>=۬_8g3R<}~A/d67wq^ߞPBeEh0(&mٯ!g_\x;.#5}OARM cҰ2J}堂Sƞ[H.=[0CH,,ЅNAWA?!L8,uYw,2CUQbb̸`J/:8q¤9:XIOB$ZJR$(!Sp +D%$)^Imݦ'fyge<|d+=jf~^tyH֣P(kQT&Rsϐ|*1jtԉxI jZYOb/.<1l#<({$Q-$' U"w@('Ssefۀs-S\^ *blS]pa#Q%?F'F`F IE%XUEhL^0TM0+0w#y;fY>(+tds/95Œ,nTY8j3:١DRchB4hEΡIRTT+p9At'ٟ'| F56 HR:GYE˃Lfe0@` 8#X(9ͷ3,5 GJl/BYuf}$/n 5 (f*eψbL䳮XS C4,])7EU4]~O0- JH`O>Кo->>p/k@ɑYw7SD)BDj]8XdGQ/q.H%ǼZ*+߱bS':P&P;{UYէ W޽:*z$/m!5R '$F/'i"U>rً0{sFZ 9t6@V2BHvI[o4QdlvuRutJQ!By0N>{uW*0S7 yp[Mx0FSI&k|:%CT 743V7@Ii%\Ĩ18?6)PEX.[ԃHΰ6kfƣߣ4} Drpu~s0#M08d$|G]@K gꌡ}Ňo-)ɜڙ6[ _i4hi@w \_Ѧ=֠-@ʠBpGy#tRO 1{8/U˘@.\NO -())moWe%_uUj]eb4Be_H1jDgZR0={E!)kGTxr}VI^ciw(d'Q$Q+ɄeQ"#jf%``BtFz D6`8[Ic^fH.`&X` 2'yZ`@6I7lXL&-JܖnE-vb]Ȫ"DÙ-U: Ch_ZNl  >FUhW7uqWP{SvT 58w^,-kUblP"e^BG.H"'o&^I0x21uZV9辱R dp,ԧjuN  ;:W?N"3GE1:3%Q(.%>2Ԟ@sBUxt>%=FLj" v!<+ 
W)rq$Q# rCJr}p[,˸O%͒b(T*{Ony~rw<{NsM覧KމV&x FeiZ VJ87o7N^֫:|?R kHIJHZPr(PR$;ϵB-"0rbAk&wXRm$fea0\[T[ ۠[9(E>*JžE2r@#N4'{.v3,y@Z A#g1*e*U(r*yU棤B&(o6V Lɴ4 6DUD*Ug23 僛L#UNJYM\9㒾LqNH3I J)edС=[5hr9KnbK^I3rxyjey#rӪ9 ։oo}7ll~;?־P}V%N~3h]QӁ V,SW8K8`bKJ% 6,gA 'YGJ2O!v> !6oWg-Vbkt]gN\sK3&_ F@'bdQFr&9&ĒQ(A]@0J*'z*~'r>SPgԯy5/6i2@Eǿ;넴!#|un"yǤ!/ƾ!{*o]3KNZ9hꬡ¨hu6Il :} 웻F5έz5Cw$1YATni CK>+ׄ0rǟ!y|W7" 8B9w~Ŋ1O,[#p`k;?|W|pq9_fol5^=م[VN[Ϣ[OOέW-$fhnk◘h[0vPQ-˪d *rT)c涝շpW{ڍۼ9B\]c1&,#λ|岈?0(;"btwD -γ% ;)mZW膯-DӊW#0csKiVW:-SstWډP< B uIBi3HPu0k1L8OXM籶VPsj"L06(dNd*1y2@%Bs.ӡ;+4B9R{ZAbǴۊLw5xM=&IY?Sɥ4ͼL' arک/ ]St=/tRf8NG!c)DQ(9(2 a{`#P-/ Cml+t穀M ۗ 0PGlN ]ݗa4'V2őӈ Y e~FjzFEӏ;4iUqzI9ҿ/㽻U˿yfm:q./:^?u|Y@Ͽ~ˋ$&T. Sb8eͩC$S:Fpv F~+J{bcNl(u =nTID}+sA(Y'&,';Aed8>~ͦRq֋jg(ati' _(Mb>HQ׻)>۽g礒uoޖe1MTOdXnt)Q wB"Uevo2}[w7WWm:<7)A!0|iY;ȕ͛ ]ys? Ɉ7,٘x(..c Ye(-.ViPq1~iB ĩ"嵍/Jre֨iew<вA7HFk'? х+zYcs/.)R?ύh?|L>s_}D?'nXW'*^gvSA~귘I]G`eM\} `/'ftP/ TFϡ.JОVWBUv-Q}U ܈Q}v@ {/bT~` Zq5.Eۀ} V0^dCR$4N[O+. 5]EGLNeѭ_۫Kv9Q;LހHOʕ' @{[ʫ&57Apvw$@zg k•o4{j}~@#k1e%A~eVg| s;R :򨓆U#!ew>mٕBw[J9^Õ͔y)5zi}ulQ;ZVn=6:6<%}u3вN;8+^K#{AjL.cIY.&lY,Ȗu~ˊVMQ[~)^֐hEz|2XގxX~0ne#D݃!DO&wtCu.HsiL 'ӵї)l-mtE޶kBf袟Ibf|TdB kYb J.b1( tTT2޶ 0'04 35n2Dd :"9fBF7EfEZ%C 6&B$xLV@V_eD1ɣKVGs]P}fWio&otbsڦ2ph"g]@FaHU$q2N1Ԑ )<*)z킖\ !R%HAd$Uoʧu0账g"9F:R{7#vm A`(lg4XH!)8(4/|vswJ%ݵ_x{λgYm˴RK5]dRPȴ~OjCx׏^,./פB~+dhYZ&lƛRQa9|*Wf,MnnЕ7㑿 1F1}']K^Dݷ2B _t*&K!u cI)w;!\[MLh .Iɳ Sx"Eh҈d:CAAp:DC]Aڝgerk{&~8h@^RPn 4v = REi)"3%g)Ϟ#(ӖaŔ &ƌCg@AQl0L .}Isn!cP[L߲pgu&Jl3f$ˡ㌌NFS(L*7:þ 1qAM ̨H9)Γȯye#ԼCJˈ.[d=\Ŏ ö!w[QLƢfR-`աpjzX*EH WБ Lz-aS0VRekmt{zX!<:X"jp>*6KΛf }䞺To35nO\ #6}ퟴb>zX# TDU M]^0ÿtssm [blbd |JM >Ү4 (Rmphobtߛ{,d U8 }8o$qi>{8%'X*dH+1*#$=6rJS#72{xZE$!'}汇x /*v+Aatb`oyNǶiNAdU! 
E Ep+d֭ /)jo3@U;8ԡ_,`upf^L4 >(@ U<zd1I0=e1Yy0vĄf]Q7|9+^ 6,pga2hf,F aR%.rOcU~H2䜡!T ;bqXdP0LoʣpӄLQCENr )_'Gu2'eιo{yHA1$B| a2hM>rBN L(zr%]J(|FI4hp9#d>j\jOçx>Cm'2n'&Fwa0y '7zx`PA %fC8/JM yp_W]:V.귳Y„exZ۵ZugO-ukon/"aK1=m7s1Ķ?(8rGf>۬Skhw/vֆ" n}h.c.lgcJqstTTWnm*3H+3ys%L[+ *8f-[;C&8c( ѥO$^bԑ 9a11oFTjܞrߑF걚 uN#wY!k|Ev캘$p6%Uc8lz_~~a/xs7&م/-HqPwT_7[O-Y+4V{#7N,٢ -ИB8rGՀ:V9g$3gn'f !<7:AE1x21J«t-a Q?F wQRSًzsyѼM7{,F,+*,UeEPjfȻUAW:+asxjV1x|H_%!|s 1tG3=~Hfs$]|,GLHRt$5q"gTS eEM߼_$.0W%\۱zt|XZHb5RbdIcOuX3YȼU!uL ƚB&Ug:2eYyL9(Y}c#5(|ŒrnS`O$ϹBiD0]dLX=r(%2XwYC30>&@C*^;/Pgdc@Uƣ0:LE" سC4\\\ "mOPg>( ~ϨQ0 _ۧo6`Q_K#%ϸD?1.`/de"Q^>yРU, bdd-DُK^c4K*;LBEĪeLߕ(ZqnrTaҘ+'creD+RF(uWM(旉pw$#0JUʙא~TIs>c] =4 2れ:g""{&%b(r3P<0&g üNd0ږY]\߬)w[ou6iӵ` ktssmÚIvƝ3x@>:_FI0ǜ!'R͏B+ Sq9"p~c1rmDxW|8?i1*UY)[wƀxTy$LWl"[q/=ܺd¢L[f~t*m=z|m)pXmQE}>n\,SCx<"kt"xXZD%5r,hSC1\Qe >02!pd P>3{0aUgB<K" I@$RD~C1$1QAM`x pRlOSx+B: K1H-GNt'A]⦡4r-)gx߄ޭ)Gw.$s,;w+~TVB^ٔxh׻;; [))Sz6a/LhޭAS[ y&ZdSs![rBVA#ǻMZ>w+In5,䕛MDbs:$5lceP ļTثMߘJܦ/C[a9T]l q3y)I W^G+bLD2^}E_DyBτZ }i|җ UB|Nf S=%u:L߷YEz|Q/? s/aE}/L~1iԓ%f/)Ft`#*zSf6_$?[ )5 U[TbMXޙ<,f:E*!PL!߻r=KRl`IzKI0Uc_~JHc< l[ޛp)Xi'GqWES2A:-DԜ$atwxߴ$OTV}J}g/O=+#> |$\#\#nw1ڐ b 8=Y`T{ꘈٴ#yib}-u{&3^/Mr~CztV u+z7R絣*die8DΥ K]rhuO} хb dPChI^Z!vKt(*_"Bo勜j2±[9Ou€<T~BǿߝǞ6ᕺ^o+ޝ䰎2ۑ CvPJ)ENiPj`Қ[- yh$0 WJY$-LxVh w:@x$W2`  7\N3ƭA1XO=T(SHDK,% gc%@5SD)CXX捋0qXAϵv߶d"Z2v Zhb4!I0m\0 F" p8R ]RǢmۈv]^n1mժDR*zV _CQ@E#- .hEcP4{ BK^^2*)VClzCM *"{y ns37'7/qTn$@K" rPpxj Acxtɩ ) I#xdg0r^'A1&Br!vl4[8>akd'/o>;Jݞ (Xp<䄰MiʆptS1| i z(k=Ji|q;N}}kR[:.>n[^l4nW.8ۛ?_c7EOJ!5zsfgm?^^2z0LK_ A ֭?aݩ !֛luxJQv៟ͦBȎ :acXHdejF] ' - ̄ECX(FE {}$&N6VJY0v .r/(ЖB40<%KtHF:јOGЦlP5߾O'fI}s6n`Xӳ,z3mh'2CnjY*4bdp}KӒ.>o]_ztrdr߁-6Wor{&]Rn#FrY4k%ktsSh4RQh(sw94㩮6 e|:zn{H}Q!Q,&\9&W! 
j^sUDžBI_y쵙p4o`(H:sAXzHLJF  @Ce}t&}§1tiЅi0h̥76=9i G ^,Ya1K3ysTsqp4^y(6zqi^規#5|#,cb(BY_Z4QEc%p< _Z)c$OgobC-ZNaK0?,DQH-i 63XpU }CwJFRd=ddv;^ x*[Bj6$䕋2Eg&R[Ji"k}:ePY@$@)6@hI`4 f@`n9hENQBOye: JtA4U)OE _ڢƽPo=R|xLHr?m~\QG8>#9_ \[ 1_HC"XcJTR҆<15uVLI0p\Y3%"JXgr{rrЩ Jj['!8Rdzr{Рذ~3 3굶ŋVt7 OwNd SNבG~_ex{:9D-p:b% 2 w %И!m2U'Y!ȁ%/$!fv'\U!3`u+Jhv5"[*ǚJR[\XGm+K k\Ф.MȻK`ِI2#. Z3eێ%l!!K - 5%טjR9T{OHsa0i|eRhՕ^]?xt_S{6u X=?k7D}x㷤$<}I߿'yrI~7 W˄5g Uzq0<~N۫4T(!uoof:vߌ0p?mX(L(_u0 of>}%[n*[mr湺ʮm2jjZբ⃰c ۇ*{$+ U$E˖,wC&WuΪaKpIݥƖ|1j Io8PױlV8n4Q )/,MC (T vWLQR/?ޚ !6Ju1TG]7|IH6`tk;JOY1[ȵ?蟿5wo(u ɵ.hedj碏[Jο΃%`%j-g ā˃b=3!1~4)ejpqi_CjIE»C:o^¿:/+ h!'Rw\rF"a|BoHyM5CS~7&`*TJ)dcxjH\(iMΓ5X$J= e@@'|S6kAy`Af6@Wsm9  2fR*e[e" n:=V 0@O0D5%oMՄ&گ}!?9 HT(i̞H2qlnZ*4d I(XdP1X̑cրJ)-7`;1رTƭ'L{Aʵ[//qbNV?|aAwjwW7"Sr$2eM25AZ ݞ?R9vi c|1ch (c+"KL@cp?r#nBM4˔noõKb`/JmQN)Ho.w&@eƁ!{9ml26Bn]֊[:ZT FG[0%("ZҰ,o7卆cIY\@ V"8i8Đ fpF{"XͨU/xe-R "(s!:*3RzoJ`.ь;%2NI"C!$KwYGTQaGcՏ섫A5Ö9* |@4~@7pޯ @ʣw? Iٴc4 {@7N}Fn?r!w6?_oRLp=\XZXvS[v?m~^vՅS"Ť?I@w?2v{r| zQ6܎'2sVZ"SK~jԯD.s)Y8":'B)*5|Lb!t#/ R?A6H_m4l*`NnǣY[QQAtI9q"2¨g)?%vDBl Rm2q$*nYqGy? 萲TNo01^bd :!) 
7XK2t >uח齈q8L󓦧R5/u51i*mtlʌ}}R!@!L kU'3L(-[2a zQ.he0vA MA_Z(:srG Mi}adddA*v<&pFޟꥑŔ㕈!!jf 2([K/tT+*Ş>%"O4 Q-Q04Ám4c8T@&J1"*KڟjpSDh6镍ё.8Ip:a8S 8U~`C~|~j;ej~ڋmoqTNMZ/'w.0·ϾM?~a毀Wّ1xY97Bz^~9iA_q|BA%e}BQߤW ~v?ICKFߏyy2O\̥g.J))tfV΋ZW.Y2%ۿ &r1#:hݎOe%׵v^h]ֆr͒S7B1[&D'bTNrZ[ !\Ddj_vXSng4n̑Cn ڐW.˔\ gqxIN.q"1oi hsj?Jcww߿5#s3/븴I݇?#}ֈ`YAygr K pV+-п `s={#3nFe׸`!UN!I2XaGcE3ڷ02L>No8!86ف)= vMf3t4vQ=#5rr#+K>!<—j#%p^F~eN*I7reKILap .(cSђx3"lbE՛h0QxL-TqM1dGNeAhV>e$C]dVF0aAE^+VlY -wv8py8*֫L95Z"kYYQ ,(E"9f:7:P&W6./L ~㎵ZzR(CB_pZ$1m{$Dݬf'.:mw?M̵>@vMuؽkuM5e%FP :] (`EW4C" Gm/3hP%FeP_%5k‹2l؋r4jWi%AHujy>⬸5_nh\x3 sOe0x Ԅt";8`.k\RKguCV쯜H?]}|$Ϣؿߧc_^ۡ?~+WIz=:8i|H*}PYZa"ٻF$Wx%}Sm v6eF-z(J&),RTXUdF[G1+ȈȌņHAR_I & ~IP|O۫=rȄ`ݓOōuX=_~JN(:!>Mݱ%ps7Ʌs>*#pai /Uc ôSe3DۄljZ;]y3\LA|JQ畒z-E޴Q (48x2XE8ʛGT"Zq}Lkr2b0\- a" SQRcCQ3MAx9MW|API D@saV8 sk#҄׆FBUDBB ORGAWӧ3][=g`r qӓ-$M !E+_l0'^gAǹ0-C"(N˯M#3뫳3 )UIb'%hM~S:*$Ur'ӭAF'rXGe03LFhWX V :*3RzoJ`.ь;%2NI"C0FDL X-4TF9-Yjj _=F5fZRv L{*`8"0bHF^3ԌctXk؀jI!ihKq6RbD8",'D!=oULGM91(Fܕ,~Ĝ%T!o A V.#D+(^ ~ ֤N ^9mb s֑Q@8 {OBRX<)=Z <5 B.E" mXB@(~ZKN*jnުgqzoFB/w3yxK)!`s,#{)EQotTT E 6MM0ͧcIYA5XฦQgbf%p)+pnrXͨm`>jy{+ѯ X{]!b%_Wk{n?T ,m'0&IZbY,߼:ܹy!d`z5ٷew#?/`t>6g?e߆i#0Yo\JcgYRsϻa?X:?*X7UK|9,)*ů y"Dҽ^ڍ1^iuBi:t Tv4xڭ y"zL ܕmkb=?% RySJbwwRaQGYՒ~RQBu@ꬂ/0]y2^! 
ÕKIc7ᦴH.՜]t:4I/DŽ6Exy4R/h"dCMfPNv|@+%J꓾'_IS7C)C=7*N&\Lh FN&# +*{` 31,:=Eom.UM0ɟU>QζM{0ӽJCJT#lOWsNN< eĬmm` RMQ0*ԯwz^}w?29@?7[eO]EIⲳ5TbK7I$!"b O ⌬rA?$wg:"hc7WMhUL捭 JT&g#k+,(@JDXs}IHF-{քJUT%A]$Ag=Қ48aL58ZJ*%Jd e*"Npo'w<+?YJVr[x$VcsErgr+Ue9vq?Qs5<=l$_:mjǚ#}YJ#ؿ[סO,o)K^x=]r޵Cz$J̑Ť^&b}NdͮA/'ZFB#ʣT{_ֈZW+>X_.JdŦi#r%ːD"|:a: e7|xcr1]}^[q[b AyHM=.ƈa%lADPliAqd#x`-Du9xڬMTVJMrhH[kW<[fa7~BH3ܡ|zi/n,ZGkD>G@8ŵ/s j=MI/m6炊_CTzTl˛*#pa96*BL5gah}a{IТ D(mC8r43 # dh!Pn^0:\*UrRuqF L^v~1)h&lVf xցB/5)3ԂG`G,)*4 I=Āghackf&rp`~Y[Oa}Dg @k q Ҁ0RU‚i]3 / -@CE sLX",ǖYZP^q$Q*`SpWKghb#%8,5G<#qG;@[V?hg2}$iЈ DkBzmP)u@Wt%= MThI\PT%\ GHlprK#L[;}(BWkȉ鍙^vE]p5\SFU*UBkeyqm'NN* qy$ѬPډ&f mk[ s/S 3{ I*2 10cr#Š2rwi87p$G#SpuA ΖS2`MYrVC9ʙ]a4&HbyȻ̖}0c&za}wb2?"G^娏&9=uۖ] : .AdT t1ODԅXCᦈc2Tqg_ ϟlU,w L _+?at",Lpܜaٺ@<ӯ烿t"|pTk0_3'pT#^Wq4翛`M{LƷh4wݭWz9⭍gw+,oFHAh*5ANbb5D3HQq\8YqsJf30 %O0)JCXR?jES=wub1a#v<K^ٻFndWY.$Q|;pfI6_607Wgd~I=,dOdlOE[5y_̯0HHtکPe ]βa9wN5`łK+HjKUJjv3}\̸n?Grd .--iBtlzUfnvf?wdwᦲf@L`Mm0v9/RܽЅyugf_ٹ6sq;hQM,x1u?o s\pۤ/fisi=cG3T܅d!_&U{ލm!OR11gxSγy' ]{ n9,+76eRwڝB5?}h5+50=-/oT޵z D)x`J>kTe+m'Pacs6%np G>hx6̗ݞؗݞė1bT.tsi!rO0r_ttH Y2: \&}6BJY.ݢlj27s\O5CS𗗈 Uy BWה,\ލJM$kNQ ;hȳv ɔRՂITEI&>#50*)P%" -r5lj!B :(,@0چ/D(os\9$5bbpkgxce8J*:k)+d'qU:b ƥ^ 429ǴDDsB9ҜCR^ Rh6E7!”{\@)rvL.B r@*J*;QyL  J:$PP2Q:EƔe ݍz:=¢㭙/FSv Nm"ٻfzy0\i t8QJR4N kac 'O#[i X`]v iy,9+>zr SEzHjYcUfPntqSfOe-_X*P(A?N:ayu <,F( ccnx"y@Ɖ KOf,96[[ Mm#L9]:EBv ^W@9_tb0z- $|xB$vR M}/͖.nL@ _0f<,K cDeZw UQkuicŘEE(! do \ +QIUacDSkvTԢpqf@UF*4DPA)j 5!5po)BQFzNn^Q 8&Y<3cIqL,Zt_I 4 .i;0( Ki8Ax\()0VvӚp\ ,䇽ݚ}p]uMd!+J)쀵ӪF؆C%UĦFhf@ Tdsb!^g%HYPUR|Dj SS@3GELQL 6qf?5Ir|q/r. kt%#xunb%F^%:͝!:FS7mmnK:'RM5OWFu=z]=J]9ǭ7m,aw:}blz^^ߞ{YO~^K>Q^rID|A^«2) 5Dy9?mrOMu(!h-T !_GPhv%]c$txrA`?!7DJ @jUG!{la-^SlM*kCbÊ( ^5ٴO]aŨaUVJ˜%µe ZcE曄#&!:hSL2L?qKw(.i:q7[y;l;76Oл} ih.?OzF߽lv {|S(RrwWYGH$Qxp^ңTʣOSGY7d~AFDE:.orJh0տ]Jo7F3F߹ۙX?g<ؗ].aJ?Hxvk!G{iIP&GU,99rO<^_܂qzwc|gRZD=. 
*1.jl=4.Jf;QS xuB.zY6ʏ:WHc U<2x=(LCR yjc|`˃=6uiCgz1~l/6FDTkjD,zq>놌q3ֳn=8"9 Mc~3k3 f BCƜc4 Z( iAYgHh)O/c;UX ^1f,;[ccYU=TX jԧ _1*\= IYÍrս+I#48ZXGS؟"Fh %䰴OuSBEbˊɄ7V0 ɽPWXM% 篱us# G 9߰7 eШ*ŧLg_|h|'?_29%i`E藟;j:\E7gS<^;Bbb6~BsB%lU>Q)o>+I'(]Տ7]xv~Oyە|(M>#t)W񡤯NPrXWnI6g}ƽB'nN3b 2΄WݵwKorXWnmʮ޲=є0\2X1*dRz.w ڋDdj=J $hLKEz99Ԓ17I#i֧cNZ:@i&Q|8N:Zw|`ؚ2V0frP/蒱}F9삱CyġC9úG2s۹7g$Q'"Q<ҋ"IRPm _t~P00( L<0ޝ*ꭀ7WW9!7t#HEy-nXlarq/UQa Oc{2) L06.KQWbD>QVݛr:=5-;0HG<AlCt7\ձ3'q}ۡV9Pgϡj#z/YvM葊1cyuQa1rQqDh1@QqDFQqT1RrPqn8Ua ݠF]ػ$]yFmi\dו%DZE6$G;ssK&5_ EnV8kWmF|)/])'tPٙzzB'#lݎ59B0\/3,?kŧ1X ip?q}CF8"[8#$=ST\`,M"/uԸ`okcrfIPD;?1pF M-:S:̮榶o=odl^K._t~K;W ]`K]T60ƔچDAԀk"p*A 7r{2 #XP lRt1ZrgY̯.upa>*F"$H|<~Bm; 3 .`#G #{+_]H!~V8qE˓̖^j>fԴ"%" ƗJ`DZZ#5!AjA5IR [(3?HP6% DqF&Ha:t5GI1HIVՖwVPm{!uWK6m)idj.6e2ؠ1ׄ3YR \޺?TWʧmnoGo]} tG7m mCB@n t{?~F¼UeQ(35dZ%p-EW<|</%EZC" (SkL@\b6 9I3k_5`9yԋ8F47#Ie^F f" XxěnL[Gw;.9Hb~s1=.W>wDAmWNy-jL//!`pf(( LɎrЍ6HO]ΛֆLW|n76SS]Mgcg$]몖j&㲖JY`Rr]Lwl>2^4n.;S aoPM+­Sm.B -2ήoGxo¯u[W9K\/s~X2VԬWU-5…KTL+;jM9b (7*%>X{ć4xDVJP8JKNE) J+ @6|1V"iknfaV B%ejL m j *S6C~4|.sTbbsbjrMbW}}[A0N/hlaqd1P`xT_>0)SU IX% ZVa!SW"hQAk=FؒQY?{6ßnoAߏalvv '?Vg=~Քdi7$EŞ آտRV06J yQz؉:XZmR1x! 615 F9RL`3l aE"VV] VP z Z[HFm} ^[c4am 3i9;Ь"a/EuǺA5`QUMu.+ȓeUظ|ӕ^L! ~I'J_2C#)#ɔu%Bry՗oAR.M[r7G֭UĪ "h^dQÍZ0oFfmC-%Z%!K8Euʵ>B~nto}geu`u@4菭Cqwx`Hq p"H7~B9n!-[Zsƅy<'QN κE%vXw` :7˜5n6]cI }?Rvbf< 7A]= Ѻ(S8Y?/>/@ykf51Zwm7^ FG&sEYnƣ>r‚Y̲[`̬pc 1Qo^\RʄvR9.) 
4% /\?,9zwX)QCzw0:OT!\H!Y8,VrՁ~V 49+ c&ysKK=^2ƔB95QJ(%Yj[gµFZQ?q5z.6o(wlcƨF=~uDFÈ@HIJ7UMc;*詣Rj)`.T;F=v4/aOBua8k3L[컼X-(1eȵW-umW-80v΃%,z\?eo<O *kmn4Fˮ ֌LݚGȴ~GVQ8fH|;P\WRLy\׿xT&7 tم= sIR 5Qͧcpzt2r'Q\q=Npnj4E * e0{s_h"0?@XҲ[Qd6P)FB>Y3[K}3q%#yy֟=CE II9VDNגt"ert!j-R^9\x Ō1\FES)~8ZWJio?6$\T[e1c6-;t3g])1~6s N6r_K:nH}ǚ0-__&3ʪ~µGʮlX1Gd>wwoxRIfҳKVVzzW.I2;tI6S !u4vVhێ$-Ac*E=F aZ:yEs>Pw%Stuwkh|}hS8dIjQ,+I/+&re"xrVÛkJ:&6!б>HUAuڧ3P+ZB=vuoU:T6jA.GyjGIwE 9iqYNdZ"{iiu}Ӂ0VmbF-أ)>m˾Fq♗_'+ri4\'"kq-sɲdϋe?XcI|GXisXcE4A&b-c4USTfs%4RBO\UO\U5>QB̕w쨃Ђ5L[b6 ᣻( y;/$zJI#@EvC#k›.zuI|҆#,RupJg{{xc4V2k< ujs$S3Ôtj "4o8T_[hFUIW- ~Aڻ>eA}B7ۀl6^rEdċڅ#FF87RBst}"K(;d jQUXO"7 O)grf @v`S]nC}J8Eps)=gu"A).E02<7w\!KYp*4SX}( bMoXFQ*Ɨ1`:08 h+|NEc0PaaR5U/sRYR* _I1+$ztXZpk& %M҂T*jͭ[nSU?y#40E\+͇5'nw%B 0맣f,v&QG3EMH+8! X ׂ:g Fs sUa*;_`T6gDM%d}zJCrlM[ӟ~\h"?~sZ VX#d'6 "2D wޟ_,WAx_꾣.@˔}v}G)r!%irFxcʕ`G/H(np &V3^)``^CrSݠ[;&N *gUYc#QHS\ )D- jBe 9c PdW6c1ɋ̓#<[Y8 L =RZ̩IKҸp` wLJCΥrL(.uR BLQʃHss׷7lǒn˻ "UyVh; Tҳ#@_G:<qk wJj/󳕹;{Dqf\Beп"n}Vڇ: ,X~\-5kXS Y #|n v uj]>=^H^ $Rc4iKES< oV/ECfgw1=;}Jt  }-{2嘋Oמ`nxDŽ!Rt]eMo8q[:һW?Yq/7n-B{"ȹGb=q^9JD.|T֮)tKp*03ʖ:!E&̵վ|u6Fc߃ 4~z# _I5 ~g'1:^;/%$K)rN@s[P:HHH#JJY3jsBBKsM%;ZJJk+h˅ O\ҹ( sB*ͽcqOH9;<*涵 <%{l0LTzba&s9-r0`4,KAhQX F =%F N<%ȃƂɯ&ry=ĴEI}G=ӧSh{ =}H+$j{vSr}R1wnG4t<x! 
VBiKj "B{A|Z2|6jvSy#{ȯ}"\3q ߺrV9iĕq -r@еDth.=E %+ [(1e)EAr q -#e./hm.1Kj'i['6 [qn&a+ᦄݫJj-`ĸhr4rkڽZA52jlھ9&'/JNW_?=:+]=Aܾۼ0Ԓa~`0Fl6pFP2"+-QdpFq-8:^aٔ8YGGjhec,d!V)v9 ><û3G`csKkåQZV(H,z@a`0A=G\kZ E4QZqoU q|Z vE͓pPtxQ;cJekw+"g?Y.gokGB K*{`I}ޒxS܍7ǚ!X&Y OXu'IaAK8 PHGdC7Y1.+Ҕ^YGGL-^b~$lݧPÒJ l5TQ{".؎a%a>L0Wtqf1iB3~]Ȓu-H2DB4C4S J\3*ԯgCx /mDq _|*[E8vԝ{"$SJ%x;r7 LJ6k;>MncٝjXSQP|~$G ) Mxe:v[g%BXxmRbœs CцSf2cYn$r'XҳClxnj[xbDK ~HnE+mϖ#EdWXx (k.Lpb&ZwHjTLjOnV(ox9q6Ӟ,tLE_z5_fբ< \6[ wZ!^2ʰ&7k~EUd{C]@W}A@с}sA}0fA]cf.$ۓ(S+_|r(逪VgTl =6AM>zo I/iֻ̾z?k,z;M7gm9쌟N#IcMJLjAE}6Jȯg޵+"Cqv/7bM("Ѕֱ6CJIXsI- C3sTg={x|zɱ,ju_a4z0PG޸?zuz`uIqd\*"w,_ܗ}z~9E>F!>)xHJQ允/]"0$t+5W;|{rsy&Q9)SO r8c$BFs4 01H1վQF.hsO -=K@i)?,RLpyfZ3<`v hIZ\2C_pBoQdȋ=8gU`6N +AI{( 4awud(x$D\IXuQ5\#&Z0rG`aX^=\F !"IQ+YM!Vhٔ 2 Ț{9Y?/c\I}OEm'vx 7-Ύ'O6$ۇv*]0 $l uD8{ o!*EAÉ4q2\[ Q`:>\3 Ki8}/*kq2nrjmw1Om8Ix0LǖǕBBI~]h7 zU»QL1Ezm.dnCCۅt+TƸ~#KcF;W`J dЊȝY^j/Ts5'ɅQ JZ>&IƣxxG3NƶqW %;=G=)#Cg̀4qjg&@$jIxȮ)d̚$EkT^clK!53[Ghaе<˖ E`P.-m Tgd3g`-^yn0k .ż͹Tԭ/>j~+ZίD (C')\khPh%@5o,KNoV41x. yL-#I\`#%;@6.4tN9} up' yk> W6SDE)éTjgN>=]9l?Y ):eq@`䮜7 .B[?SsIWk>M ӻjur@]*,q& K e2oKÉO%@{tfC40qw!Y 6Gl(;'cﺝI!$4߳({e`WIQg C 8MF  ]:W ̙85:W|2}MD0N5Hy &%/bPf#{Q;4U+>')5kĆ++f>=)Y ~1>xOĠ3agyƼܞ)J3i4D`QE=ۑba}e` L*k_ƀ\y=PC,f{3,7V0e9,槟)2߰;\N>!\`4]+|lX)dž]oH@xԻ% *8UFmIwGd %L貞̝ ԝ 'Ca BF8\y/ IWr**J1>TQ_x 2 $p?f1m}Ef$E?~j-&.5J=ߏGW[da:m.0¨$jI:ਥ8`#yZLTia tJU' \QGϹuv5})&Qɍ4"q4n%`vcPA&˱^ٕF1OrJȮTHx>rWܼfLP#iǁVJͧyt28*|,pC,m[ fKV!X}ژB@VdMoյyMR@Ķ X]iD>*T:G?*Ӂ0U'p**8^%Βxir89f+ݙY6΃]|3?IE2,:>6얏Y~e7pct\O0_/ e`L1 L*m}jcَ6@0nvyOͭ{n.b+~{lvq%gq`蝄EZ @`>7x'1t6Ŷt"ҒP"X>^Ҹ -$Ǡ{w ugfpV܏qVf0I|;zh:y wp2*v[S+Nnjl[Wo[."?̞WB؂h?P906{e%[]ah9x^N}c{Jc[5qVݝ 8&/IyGkHd8l-y'V8RqFJ"3zRk:{}BWe;GqNNwjn~Z W1/mx94.] 
w\U7?o-ҰiJ$_]ooajA(LW7jm`s@~glah^1ˈ[Ø',aR7 3= &<ׯEq_0 T+{.0u4ަe%M @Z.FS63&9䳷./囶.,"\ɻ3 :0,.: Ng+`_ŠR.m~_˕#6SWK]b׼S*nSEzˏoDO36R<:[s*iG);Vn)ˆ`?T).:(`90 |86)}_'ȃ='8њQZ?YRP_lzA%X^±X3+P]<#K>nVD)$:ԤZ u+@|X{\_ݥkRfH.w:}Z58 .Jl񇝨EZ_h ѾErr*gdmu7](9< hoҦ%hDXXq#&r߷)snyb"q#IMvP;{λ<ӥ,u\<K^dƺj$=R-UdiaWWӌ{y yV6Ket|aΏO;y\4ff^Lgpyh$W@|Ё\ާ~zu?3N? 9 ;jA^iy"|7LzMWo^_[wq{^T==|Pz~cQ/{\H[]r{,Tԃ Tg.<  mI4~|tQPzv~ '/І:nG}֝:AMvs_ţٽ8T`&f}/2NmyOnZu3^ޔ4|暷|wuf'˛gc.jлSoCS "c!m"; j."Y2Clɀ.W[1wwBs܁p+u7 .Acw&dڝ;5`~N-֦YtДyy<Pscf|-_ܘ#~`^u3ӲF]:uN!2=Gk=D"z}zrP&7>^w}fPd<fppv}6؟R~WyYm*X^NZI8-SxIpR#@55_:=9MQ|ົ]g3êxdw_*9 P/W7!0S%B=~i=I/0/qxߎg~kv6&I΄ИWF-\O ~u &ԙچ^xh/+׬LF哷?WIZ.<tx} <;yy: L0Kؔy_ߘ0DI %S\nAk_lGXOPZ^L,`a 6Tϋ v`L}~KG4K_;Z q6{x@2Oz#.CPZ2xTK.ɈF2tXLX-J֛ZxýNfBqg-priHZtyF:BD(wW>aNB!q)Z`ԥJJ"DEEBy* (,Nw|>a\أ-qm" X>9w5&ѴcA[p 4FMh6&˸6Lxwv-ssTp͹p恮R(R0pwme&@}Q>$&58'/Sa9D=InKTY.龽sN`#dyJ*dyDhh}$2x9@Nx )8ܸ06nt@e:`,8UMLJiwH˔SbFI$B464F1Jr+%tLk`K qsL:'U1 SL6GOO)^}]dhե.{S4Ȩ*.g>K ouFb50yR/)qRaI i̵ВRgf~5k>F#d$2I,)fFh5Aˬsi)Zf38je\.&1!Y("}}}j\tY*ޫ|lmEꏑ^tA*Y pRYF^+JubFV &@hvPƂA1ORS+`JBi"="-7~Vsg' L( WHbqLeR37@@0H`4VS!GCn7H/(u~i2-}00$BIRhH$ #MQTDy$P7y94=Ihpzz}#Ɩ6Hn,J*Xጋ>fER=YChQDށbƻ(ѫ )6Mbzxc}Z?׸?! 
L~083_ 3n ®L01g1CNߋ,e}f~duZg}9 0 >qC|v ;׏[ _CnYou$Ƙpլ*w3((dP`A#7Y=̋7؋v`}7owǖfo<օc6:܈w;MO|vrWxpKIDRH8=Wz԰փYn&#;vm38{8K>~3ԣM[%`fQFş9E+I^[J{:[g?_,.,w>RV$cn"l 8@p *?/?lKlϯ8{2h >6*n/wްom|c-W oJ!e5*fc̣ }D,~ruƳ " V`~JfBt_fd-Y2SqJo5sJka<擇 c7iM}t8-~(c']f火fWÇsC#xczP ",,1W ?$T( +C #dkOS7;\O~ Y1qBa}fF /soH~ xO'`ϺݘDgSE)ؾvxm:(";+×nn 6nxmеo~6!;jm߾|٫a!=g+Ahl+{g>){֊SMd@A=+c\jDݣ}{GئDSÅp ۱~dL?<`JJxXӐ=<?%+JkP缻:ߝp S.<~,PK Wm+?/?_}׷[OYq)۠>BAi7V=k ]k w"ްMVͻBN1FH҇fb:hUI\{=j @OeKٙe JhdlJ`aE(p*CBcF0"uR`kr=j ;qZ6xE&JeN~N0&6 U@ÑRzOEq%tD < pP{ 34#jNQmӞ>YW#SP&;0um ʌwcA('wyˎ>r:v&`Q\+u7KH*g}By]cEl,"6"G"j1+^5+RVԉ 0ebIHфCk`p`F \OhW`T9:.P7п#߮f4BmlPP?>j\)o<gOQr%#m -sBR3ɒg7V yy~s[ "~ Zp)Xx:HG8'OAEhX^}cL~٬"0n@U<2\g)-7K}Ԭ 4Dq$B #c t&0VbfyQh.()O2j7x?ŧNx$ ێ=zi`;@Թl*7֧ꍯUY3e[2oL9-6Xb D S)D&cUx R,$IM$I܉YxuR^P>!,tMWMq4$Z{q&lenbܸ@Fl9Q1q!s4!zw̞iwZb͋ny: KѥNEVp+J?umFvxk5Apz.K5hޮ-3OFkc+f=:`#O3 ]>vKE6*]I^tpñ,5"ffa_7y-z/_d.uBv Z!=E-C$So)bO&paQf_:kz}Z*Nͽ;Z@1C4= o{X ]DR~Z(bu(|DsALqsԭBjпF˘fcGL1 tZx9J]q`kνz@s?ЍE\9F+r|Vk\4| _nZ߭X#cĆ֪ߏSIe0a΀X'(x}u]r$KEeYTL>x.eՐ͋<5 Ur5?ew8}8~A/jѴ'O0n֧er8jIǜzd0H&㣍XKDT fNN*RO5<'kh..J{(}WqeͥJܭ`Нp,6?E;"$]c+꽲ǁJn ,}owLN4HO䮴ZEn*KIfy,7r M;FwՓmt) QDp|'0)GSUNJXz BxdR'|q'nto~ExyWX] xqo.Z{ //{#8=M99G6cDJ8Є0`~"c."jrH^!ʫiI1}B!+40:W(b8Q@po˱7͆ʱ"ؾ\t4Rc ADawsar8`wt9bBBH8;8q?sjݿ8{eC-< (D#N51Zc̛.1f)Yኲ? 
h?.A$v߂Msik^#caΉ!&wwBH%@sܔ՚:WZ%N7QRJqΏn:P| ϼI9eu2ˑ/^ I1K@L#) IIYR9d @IJ:j:2ى v:\XI*yH͌ q&|L9*kD)(JHj%Q̥\X5CzK6lV)z8W9-dٻrc4a cIwo_  %<AP6շ[V gy."qE$RtRRưSl)0sGbYfCfVtCLe1 m3h\V4֩$/ĕzyZ}[k faa8O:ްpn߂YT࣭/^ [g LkllΫyj!AjEZѽVtoݻՊHI(dsP@ $I$MS!Y1MT‚ۧy~ >G953h7vYt5ڵfֺg f=Q} Ek4y-(a3S\f@60fᐻa$j:7jԠ$5|,.&uny?H'Y%]G/c*=lNsͺ3ܗB>sFE rl mBN!Da y(Tc"u dW3TfvHE۳ZlTw>B8=!BQ?12he&LdxbdISc .-)EN2hTV-S#E( *[٪pb< *:lEuj1<AbjRZԨRq/'Ip"Ļ70;qPŋ^tvB%Na+*7S2miƚS.bs-@cE(# iLA!eQqEHN^QJ:[`{qoS0 "2M0eTq9N<6FNU€!&bSEZ wmJhs:5e/Ivmd< F:%U u.)QJGX<],]*.djG,V|s)XIוSeMcO}`u5򪮤(l۔ זw-^лAWnG>Ks}^|ƿ4'ЎB!m&G&V- [{o¼..!/{Z~I|/w۲Bƛ+"_z ˻J_m' cwH,-N]P뗣vy\QR$QeLtc*L' Z㈳Gt3_mbhRsB휄GWB~ l1kN¹*<y[c^ 7o }~ВM,Ai#_j%/&EO4ɉQȉQ1'ƊHy b,Hk@F*xZL*]FDK"\,9NG`.gZ|b`%`p7RܓZk-p!iewZA$=sniUgY=@liɽ´)J=Dxe>^cqT#cVAI$\YPtFHb.6w WN᪝/H+)ɨm9wjN-]SSUL3U> fp >k-G F aǴriQOTbTZQZqF!)-&M sR5hfH0댳ee]UKTa̖׃ːgo.GC2"e:$u+ wלtqR,; l7baL"(3J;b֧1;+)t܁<.w埮b}+Su|yTh3E XѺ^복pmm8+59zauq@m?#_ y_ۆ51B{G,*7aN.#Er5FݻA6\QX&,%&yʨ::bmp!N`g,g8SR8$!tMy41F#͸J*w+p7²U"P.T1 ,2H _)>zὯRFp`s;oO^Wv ~#lL4?^!4Ïa{N'*/ #)FGGCm>F5ҵ9IXc [ q/&,@36|_8!7>wL#ƸAA]@UJќFh,<{ʆ dl`D^ADX4f绩r鎠0SX8|ٿ]믣;Mpb1mu:ދ7ٷ_~?gDhZn˴)ԙf Wf}s/i )\O`bx};<Z>|ҳ~'0ytV`S΃0$s̬O6t[? Bxf3b|]6Á|1Bax9;A'166Қ҈$ u>r^*yƿ"Sjsv\ϖ IX1TIUQ !˭3npujYgRcR>(qZ1ieϞI!eL}iL?:Hh3CG,:mRg(Exi:޳4Z` jܩj܏<DT+{{8%;>z tm|؇J6N0έFDZ W!U]K¶`o7zі a ҴF+b]sN}ܢ,7@u&\Ym2_/TNzfX-(XzڹpR%=Ew1e()u9̍*EW|mK.蓄U`ԺB.A֘^ 6aKfm (bL *jMC^5]!7x0ИCyUˮSTNG~>6gjex=9.[.2Ӭ6t\u G%ؽ|F/d'%8 Ri^}pKRW2xwe;)[lC'̼Gf^HSͼ;6>sf}& F zbL/i41$OO<[q@yXWj'$ ?¡VAN8$9RRR;`8W?R&dJBz/9.zerUsg"1N`6XYCmB:1\8NLrb(-[{z-y\Ոbi'k,I8 P#cEVzd= փ`K5Wue0+HłpV <2zS)lLPQ+PbA7v4:;c"tz]jCrqm@5QK)t!IZ 4xE. X$N1s0T^njIoNB kH 0A?'߷Kfی5LjJɼ׸ϤAMq=hv9† M>`4:SIK9G6!V )7h~ 2'ځw!NKcNMY(xPĚr<ƒ{ zg9[ ~ӄ+ (I=Ҡ+ jU &#MDbAp HplU@-L\PikFgo.CO(ILn<:!pSۓIKǴI%UZb!_ׯ&xF>G5~8aW@ ј!y©dd爰͍0@М'5DZr9T /`$栱cϮ{=h+ku{$b=̲JH~VI|ctW5 @K5y E [W `Xn.DgMT.?f9! 
b_˧6B+߭1G#(˔N)ՓUl*6y_deFBTTx[ecD@Z`YȾjGHR#c%"2Vo)q`ƚ(|cthCK0Y(8KP-N(Qٯ j/J<Xnf2H~Xۘr^JGX[ 'DHs&9 T \ux7G цt& Z_:ث"{yvW OVDN:T-G9R[A+嬁90jV \NL$uƂ I-"ȼ?kJГsUhwt*3M^n!X<ȣ:eQĺsqiJw4uBCsXe[槪ijo:%"i|Ϋ3#vR(vNM LnnR=Z"_}(S9aaK{_7ԺuP#_) |v|J)[r!lbm:]+PRZ~S5ݶs= Zl֮#60ZP}6j׽DَOOmjO4s"enRW&HMjz6LCӞLY 5Pr5?pM+|uaARW0т̂'IoGiV-PjGqdG'az\;GP{P;v864>ܫyt^_κ,O~%^\\_]='\-KG95 ۷sMKO;};=I~ }sқYG{ܥH˜r,t9x<n}xz۴b]#P49[c0cSB4u_d(MsJ$gNOìŝ nIeSNh]|ģ#}7t.=Vo ^y_~xzt9h&gE?4L{8 7?2°JOFLNw0)={06?:i͠7ONƉ~a"8z Omo=jE7v?&';<§΂]!T0Mq7=OeUeϴ=LɋuIG&`Z;%X[a6SOhbsDQG,f/Yꐕp$zCe󺢘:l2I׮3\`q_u gT>Wz9ʜZ*hcADuz.c&X’kUjMfAheRF8Tg`'`̃J|\1wƎpnӲrekܨfM6)WB*EQI2m&waY$<PH(#dD\\z\guAQ󎷞lıCW":/Cr2d +/MάT)9d pwh6 d-y2BhB٤b}(mD&jkSEYI( IO*7t bӉR%cRj07Z?=,;.QN;f -'ܸJ6,^dxq lyM]ڏ?"h XcnvqUڟGFvh ?@MiWutoJ˔P._Ϯ;E%nUU th%Ft3XJPA'{j^n9p0OeIJ &&ǚIN`jQ0eYb`NyQ}̓rI`.) ~/HQ;)/NFݏÊ#UcRn1UC5w'lyK/<|%a.`9nzj]<+xKzy{b'jƶ$guWRbEGEDXD6s?Tz}|fv.F%zD>Oy;T)訜]R LWTH˰iuHԍn27h$ncb/#:8ۊPa2Xc,"" ď-!Hu]&ڣۿ&uP5Ut-͹/a L*2sĜu M֙u͡/bgx3neL< N!Zr HXE ,\L5*,Pu:x8hAqrqkf9 %AU*KDܲXPELy< iC8F ɐy LSe,뇤3I2|1y"7'TM$a89rXcC19"= BzM1#-1@ غX\mT71A G \rGUarHzia`02p|#!m9HP%Ń$a !Ht!h/"-Mqc+ኄW^RЂez9{5Ea)(b}Rn/_DD,Q?>}n`82y_ ̮c'= ގ\ p)a2͵ۗE *H`p6u)W4@$Lp-w2&xȄ"'sE!85z(-5g TqiP2wx#_bo cX`vgKT n[Lsʛ6ѦH~tk1" RQGÃ{֍ p!-i9:to?::%Q&s?\4Z\-o[BgV<-.DZB`_5bQz-9۽ qhkw7h4X{ *6[&'m%`B͇54o؛slŀ^ 4g?\h_ϟ+?hfC [$[)Xhnuim; ,:V~|R˫RGo^Nh9u>^ vvؤE|oDX %B".&FpbHq;`a%fQːuA6##" S⍋9i8ּfsU~~hZVAo"aVկhNJ0h&N{T,'@`^ $ $ﺷ_PPex/) +CdFWy8sVKx 1,o + m| UM6q\fҺ@AX;0B@U,҂GtdZ0)=T>ǥ$8"7dZI_J;aZwKj쿣#[PROme (+gpz謰2r_K(ZLVKi!B'`%R9 8k>FAn/ԗe{}բL\wArW),܌'w !N4m B·Ί8rH/|})`Q3K50D#N%&ՃDi(NLĜnz6 㮣P1m3 [ ]%w|8\6?IkIS"Fz2HRw9qh`~ ^0_FU }P.,p#X>e:J+ƚj75䧴Vv4 Gľ1jCxQb&tۄҿ\!LnI{&c]D\xwV4^_M[UMuy6JH^fl[*:ۖZƶykfSǵx膶<i($IVƠP t[ڒ0'[RQs.QLXnh_QRG4oVPg~D5s ,&ͬhz&懶;(ndXL<+6td_Zq484 F#vQ  Hf9h`-}eU@!(teQj瀠Q9R: B1 eZW"DhI\ bn56|@{3RϚY ߼={q~_//޽evp k7 d +SYIF7.>̊=9`~]HVR;geݐItH;,{&q+)" 1uMHP +/=?ykCD]wvJ aTgpݽ2bj4O]1zn]ϒX,' . WQh݀ 5>MŽE8+PX$k/^'4Pcd$? 
hxӁ1uCU~ZGکO4bh棥c4.Z7'|0^ѹ6;U?FPBn)0͵a8qW `j 7Jq`{ /dظO)lh D#+Ǝm+O BQaW ~Nl6 Z/{]k,RPp ДmhX !V-k4GVej)-9ZtӳѬxc6aX6=ڼp͙$5#:|e&#G ƲhG Rם/h kmQsѶ0e)q>sP5|\=PGIi8"^n#Rb ƀ,ˇ‰|F? `3w}ݽW]]dD p2T }ġxP!A_IBIAu)}O b)e3YC9/;7^aj<=/';0TЅ;Q1"Wz@or7eN$_N N4hϽ?O n@E0{|_ah8j>ڣ0ƙBO:wC3@YFo8ɬQQw;7c` 3{tYx-P.v<[k vobmJ!˒`DF&J&@n&{zJRE(S* /<XgTD1χb{EIiOV!75Dy$}ø3Q7(  ycs:׵pT Q8WJe(5 (:`Wْr:}}<4Bbid)eR{Wb\x23ckg3=-mG/\=[ ,d[&&xB:K%2(0i!!fTPjP(wvP^ Yr~0?8wmSM+`r0mIPn+jzP5j\v mJ'4*IyvKPZy>E\~scRJh'eb-/SCY_*h&zpHYjS%T!`OoNdJiCHJ1!G:}ę$ȗqDos!M4wu$D-|aqg|ϕWӛg:>%IʀVOA2LYx0M')<#g`MQ`ALT {VY&. шf;ŐxZD yLH6f62u 9Z5G5)jߕ<9)qƕBqM1[7Dzk3=.sCO seVFHdIȔnqM:Yat?O_:ͤ/(u#u3J싵N(Ic׶?~X fE&;RzGPMg[iWH(L껄B&.z0.\. 0 ͈'li{ ¨>vfF=.D:aS/pXbx2RƷV"[w3ow 7K;=Y2T u҆DWG`'-{:.P56(hS a̤X8dYjfQCi汨;qlaѴQlw#`ۨmz / g맹Ⱥ,ks~\4!+"{S'D^0j1*N6b" ʈ+#WE1khd3C_H*J $C*UŽ𼐄s0 @4g ϛ%=geNia4 ,s:M,hil|#5&i֤M"r+KJ7HBg^Rf+T 0v$ ɯ,q(M(u'ͭu:\T{ fOIU bWUq$/t|eNB@daK,|- u(Uv {', ՜y?n; lDY_X>?$N&9{^ekIEV\.޶n ;XRf8tK}̔Dj} ;F9!e:>`+ULޯOv4sL'٪tY*KJPg =i-S%y:[PT$HnId2ѩf~)gⱗSgֻǡ_aL"gަJ +vkw7jMSE!< 5jM^jKs }5TN6n#''į$(L,nja/ T_ĦcAE>f=(_B֊AqD' 1.": 9-BSO>~ ?I۳$Nd=y(KM_:W"cN_0y¿:87o_:M}xzxdy7i+=姟_\7ߞ=߯.~;ͽ:Qf^NA:|nⷋ(1"Ntc;{>l,N(|\Ը_wi![˄.Bː7S7-<'TL 5ν@hs2:dtBi]]+PUv.I7[qt'dQKɾK=ps'L&&sj\T<412(y،ţ X;o}{70m%3rJv'lO z? L U$lc<(~|Gu U݂fq {E4u964%YóXe~ _NCʔPOP]rB6! 
OVd 贶h:Hz)SR:>ra|67 &8ϗC/ȿ}{ &tN)wcLs͜oi<E%PXӅ Ҏ|9&Y<]W }j\︟U1 ٳF)d@" Zg, ġPlyG&r&*6zxۨ>_c>tO^r0Q >OL1lMk-yJң?΂pwVq"tt$G0뛱~HM=wvfW 7|- /Oߜ勂ӫIr4^9( V:R+R]QXdXndgcIEHۈ^0R2*F$tr7TQ]P`εUAe;zwA>_3[A*6F9"/^[X 712Y AjAOeYqBЦ(%HBQėoAfN)=GӜ}dǛD UjaSn\0lWiQRO.1iJPLWl^ʔ`[{ c!6?gPZL]HIU#KV,P xŮj dJ|4 NS%Hi.Ԝ(Pf8 /]2U :/QBZ[+ٙ^K]sJ5܆RPđ6r@Ϟ97R{èoZ+pD೸O][< EԇH1 ϳc/Ɨ v@`JIĕ!;[و8{bwRBzT1'R B‿遳qZ0W"Ův+٠ծ9=oۡc#P#O'SP>(xX9xC3,4#LIx?SsPM&Ӄ6NiiJn!U\ިvzb3Tكq~\u}8^X Љ&*6ē7F?R|4\Oh1>ubV0,l'P.P3B')TurfQlI{g# '9mU=Z9L7;|A*hu#uoo !,5_`@4\PG4Rrz{CܜmWazWc*xHO/̙=g ق1UBGpBڻ2-q"OK(OWQ0L^{3Es:?7gt+A7_៚ֻ^o ooZ2jE_YI%o6ZMeWͶbuO n}O(xSe!FR}e܀wy:vT#R5SZV+ֹx)*fI7?Ǫle'VoȔ鯯,6Y'.Ȍ{[w RH]hw~s 6d1mK+pT(_̾cֹ}E8C!VzE3xN|WnxkCi̊B0VLbU/.$;nfH\82eE -&-ZAhN*fWS2n&T'BAKb{BŠC%ŞYz+-l[ٗg`sFF<'P޶emۓ󻏿&&@FVMUš{liQklJodB9[•SLw+ dlԮnT^aՓN;Drn61(42 ]x$ UtB0?` D{ xm,0 [ `y*6 +̈́L,r&șŁۋ~v.n<mv6L=ٯrL/\;V?(doQ3hQ@>մt3@a&Ȁ} g *g׋suip~F_\2RO m6m>HEb\\x>sCRzX.Ju4"k'%c{Xo vϞ={1ڵ/V7*Ԏ"6Va]{YUnRMіI{?^aRsӪLz"xk-¬ `ۼ F7F%\KV2ƵѽdJ-ЮCZ Y׶^i3 KFv'(K>](n㳐 x&_ʃ;UM=͙XfvT,#rM:/'4Bms(~;` ΰՍVףۭ[ Shr!<;Z2&13leFnu%DXmYBkJ Z:U{uO=^V~+E+}A M|*'mɉMSBQlKRB$S6S$=Rq4%!aHA|1bP:>kr z3+L0>Ϫ$9ARrخ뭙<ɵ냱@{yNP-ZB'm\o]N(sV+tYdhQV[Rqi l۝BE+j~ 4t>)eM$fG S "1u [aԋNxߚ\^b͂|@@a>jċ7l/Z#0#>0=hV@ࡀ0F C@f.Tks[2\qF  ~W:+&_{xIń >HF]K2 _I?-md/ac*5-L٤lTr `n{=8_ Иɳ!g}=(ƣTP#pXO.X'oRݗ~ чq5OhKz5 ӧa&ۆ1yܕN\R)P [Es :Gb=#Tjr}nLtЮUf} (!aFʙC=ul}} z(r&KVt/m%JuYZ㤕moN O2lv2>ߴnF؎ί9z{wP J6 Y=1i f SfMurRdcbc|- FR0^O5Y.G{or^7sYq<4MIV+y[󫩿ii?Y4l]#Ӵ?G罯p~ ?\]^퇓#sJ{47۽{dXx7.^]qwr7OC¸=!k ҩRǏ}4}wŚ|]|rDƣ~O,2Ϧ5>*ǿu L!}DF{)Z:nrfҋz٘pCBC 9 o ߮e?4 EeC9 w5 D-( _ţV];z o&ܙ[{wzatI:ԦKk%&%ރMn}3ozŴl؈GSCߍ>ān*dUh٫43huϓK/^}]`Y|p~߯8iw4?3°Ln}G S&@[oD!toӷǞM`9Цa<HzIߣg)\( '0{&UU ?O6tc,%Kۘo{GrgE%Ԑ@`iH4u"SNMG3S[SkĦ$Y0͎gYb0BrW|گL$9넙uj,sT/RM,AςSJ> %,)>oNg8HOdNMN,F?iw6BCܿH_-{!V[5{S,nA~$S}i4VeGpwB#pwqN^ ܅aw;–`_kܤ ;P㡭P!W_?Pd1`?f2xbqOI=E0F^4g-buDP1ϣha;{ _'~LNXNG Q9xAAo=&Eru4]b|]F'D6EH/G[:F}v}36vk~ig< fh6vrגPiy (:W'1MӊЪ㪹"وj#[@{&v6CvA[2X#(U.(tQdze!|t9DӜS@jEu8#͈VzV񲜻.Y뒒ZWПjU:A" v j)A+=I)ڐ˷i$|9ͳ)]3]<&h؝Uh q~+Ɯ+Ѫ/K 3YSDتyH 
y48!v<~/BV|z,3{+@3Ɣ Z n6}NVvy=) 1R2؊Dpʈ*%HDJ W)82`o!Pe* fTqi3d ̕ &HHd%gk}aiψpɍ ,:; kknE h Uy[S3I}-W_ةC٢%:<'6 ?d ӾZjyr8XP+=2:n0A`rJߣm_rc?Z 6LGA"QoB2_2z 89!ոb1R͇~Z!MGkBOx- {]'k  lF޻ñ3>n\5Wκ9 dmG2½秧k\7K*Zmk7@#^DJFr&9kd6DE6j(|L!>lL@w!س8dTj>G0zm&#lzeաo'ޣ6nCb;GC)fUrx4=p6 D&E2};>F8G-ܺvT>"tѲdFA+dR,1#]0B>Amz j=SW2B]LH`dym,E f'DkCTMBj j/1Dj^/c:Y mD b@)BP3iP02*Qkb;1LD jF@IU YxԺul9CL=dT2"At.,œ :,<(P;躕VJ⇳ )D+B9!ke(1W A$DֹcQ2ͭG5;XͽS&a(^aNBR`ØM"0&+UW/~9a њw gq"qR Ju]ULׅ(^^HBr֕|v")c-9> 'ݯb}hN;5*KHFLX,_i?ɾx\ @'Ήl ʲYR`<wmaJ./Fq<>iI4K(i@;hݥ[fchd`bQJ5^tl WXL+K25(&"4Ѽ-AS;sc ^;*QŌv wi $ټ;vuУq»P`l(ƦΏB1~aj,4=2]Y61DwP 9IzrGK%L l=G &iN`[W$@S(:TMsL*&.VXLDV5 kM9gځ&za h}͵-*PЕH'"s ք/R^IEڑ>D0 ND+%hs\$^d'X"kHRX EQ;/#QK>)CXrBm%!L>g)+ƿ_]ϛQI~ڗxt9~DRX"{ȳUeѢzt ?M@G˥%fs*%?o颟y"rt]]^/NځbR, l7O ϵ_~K5tUnYP1/.~ޕ-)Ưyʺnx2Sjze'WV+>+MgN-fI;BK8_%=cW^`q;IK{_p@3;a~x|>=yG/v2'oOݪdZ֢hh?%ѭ5n~!ܦ<-7ydCI6mw"ztqhMv$?o7|5[I>EKfX-ퟞU+7&jd{ʾX=yޔ`yK0Y]3zMJÕlNQATR5SЪH0$B[3krAr?vr'w]_.nv/n_{nvu ` WΆ p5oױPskv6҈Ez3X 8/}vv~YRup]`yLjDʝܻ?O\.Ӆm޿A0 uGE-",jۭOcrh0Fh_PG4-l<giYzAv@eErzZx5kſ8Guf==lXwZxc[샇ؗYd?&<՜<ѽʳvbS$VX:к?ɁOm} κCV>kwV0 y_&q5&ZכeCK y 9U<:rh({˦"ԄޙÝ4լ| `:X&L2,R26a2Eh&fQ.zlm/7硝,Oۙ񢜿wQZ u-z_3Ӣ+3߽IVVO~Irsr&(2*?GV嵑~X>|ou 7ҤhFheqGsq`2xfOp;<_-2VP+=y`#]RL?}?xOg:;5 gJUh4<2>Jo7𡥙x$WB:[}Zs.67o(=`5UBFN͔M~y<:&~mm u^ >r=Xk 'Աu;pWHl'߉P$+,9TG*PsbĐ9mtj ];="cfCFBxFwne K`uP[_S!#~1s֛\=#nY@JYCX@{-ݣUs86(aRp/a``!cNj͑ۓ;[J &ؘ OM2Bt\ЋTmr}%v0!#0PHnw5ȭŚW$&Fm}7=ԡ1c3yqftWI=h{-퍎́vv@s iGD-BQ?<_LG'08(sW{ 4'F!EOSRxBt'j:=BgSڂ`"U pEibㆌځƚ)@HlϩR!hO O(dĐjLBƼj:&[RPmk"gbV6@JlUP[TE1F ۑx ٿUC Ng$C}` EĶXմg2aGxԍHԱPEY%xcʅ| Xr`.\!&ye=ݔP|㴨B`62՜ b*29VF1!򪒽 %JE,%[,XeI,Q!5/n}fI.6U%N{cj/RJɋ L(gv!sl# S-@.U]U&CTk+tI)z=rdSt KK:)3،c=D6mLʼ7IӢ~#v<#p+vU[KQp8 |mGp֮54b&N4:$iƽ12inPq%)Y0]x ju2kb<띅YϴК vJb/v6bq>x6HP& ip,1ɖ l@F^6XLi7d%4E B\˚t7N t?N:7s{ĨU,xqk/L]~7hwھ_A˽;G}25\~}58V_3}{r|h7W9LdzPpg1n}7?Cɫ׿?o=_v߅rN:Wޅfi~ry;M;M N&|se2vN:,=^.,S_eAAʱv=ʝͽo㌇#1w~u>{i&&l4Ihqf7>ž^DZ!]qF+5.6%Zӽir=JZ`}F no>#\6M5ϒ@`xB ې[eexp$O{Mnɠp7~qwa =w)a 4||s1Ka??y=`t4e{ë`vv`_aX ow}+B`(R 
Feb 16 14:53:24 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 14:53:24 crc restorecon[4699]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 14:53:24 crc restorecon[4699]:
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 
crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.209675 4705 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217862 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217916 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217928 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217939 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217953 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217965 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217977 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217987 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217997 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 14:53:26 crc 
kubenswrapper[4705]: W0216 14:53:26.218008 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218018 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218031 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218043 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218053 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218063 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218072 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218081 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218090 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218100 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218109 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218118 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218127 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218135 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218143 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218151 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218159 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218166 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218177 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218187 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218196 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218206 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218214 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218223 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218249 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218258 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218266 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218274 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218283 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218291 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218299 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218308 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218316 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218325 4705 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218333 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218342 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218350 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218359 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218399 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218407 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218415 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218422 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218432 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218440 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218447 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218456 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218463 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218472 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218480 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218487 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218495 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218503 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218511 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218521 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218529 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218537 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218545 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218553 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218562 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218570 4705 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218578 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218586 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218814 4705 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218840 4705 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218860 4705 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218873 4705 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218884 4705 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218894 4705 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218906 4705 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218917 4705 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218926 4705 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218935 4705 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218945 4705 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218956 4705 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218965 4705 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218974 4705 flags.go:64] FLAG: --cgroup-root=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218983 4705 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218992 4705 flags.go:64] FLAG: --client-ca-file=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219001 4705 flags.go:64] FLAG: --cloud-config=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219010 4705 flags.go:64] FLAG: --cloud-provider=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219022 4705 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219035 4705 flags.go:64] FLAG: --cluster-domain=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219043 4705 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219053 4705 flags.go:64] FLAG: --config-dir=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219062 4705 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219090 4705 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219111 4705 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219121 4705 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219130 4705 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219141 4705 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219150 4705 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219159 4705 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219168 4705 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219178 4705 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219187 4705 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219198 4705 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219207 4705 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219216 4705 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219225 4705 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219234 4705 flags.go:64] FLAG: --enable-server="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219243 4705 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219256 4705 flags.go:64] FLAG: --event-burst="100"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219265 4705 flags.go:64] FLAG: --event-qps="50"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219273 4705 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219282 4705 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219293 4705 flags.go:64] FLAG: --eviction-hard=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219304 4705 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219314 4705 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219323 4705 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219333 4705 flags.go:64] FLAG: --eviction-soft=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219343 4705 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219353 4705 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219363 4705 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219399 4705 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219408 4705 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219418 4705 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219427 4705 flags.go:64] FLAG: --feature-gates=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219438 4705 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219447 4705 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219456 4705 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219465 4705 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219475 4705 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219484 4705 flags.go:64] FLAG: --help="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219493 4705 flags.go:64] FLAG: --hostname-override=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219502 4705 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219511 4705 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219520 4705 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219530 4705 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219538 4705 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219548 4705 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219556 4705 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219566 4705 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219575 4705 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219584 4705 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219594 4705 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219602 4705 flags.go:64] FLAG: --kube-reserved=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219612 4705 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219621 4705 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219630 4705 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219638 4705 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219647 4705 flags.go:64] FLAG: --lock-file=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219656 4705 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219665 4705 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219674 4705 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219688 4705 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219699 4705 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219708 4705 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219717 4705 flags.go:64] FLAG: --logging-format="text"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219726 4705 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219735 4705 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219744 4705 flags.go:64] FLAG: --manifest-url=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219752 4705 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219764 4705 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219773 4705 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219784 4705 flags.go:64] FLAG: --max-pods="110"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219794 4705 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219803 4705 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219813 4705 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219822 4705 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219831 4705 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219840 4705 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219849 4705 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219869 4705 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219878 4705 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219887 4705 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219896 4705 flags.go:64] FLAG: --pod-cidr=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219906 4705 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219919 4705 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219928 4705 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219937 4705 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219946 4705 flags.go:64] FLAG: --port="10250"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219956 4705 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219965 4705 flags.go:64] FLAG: --provider-id=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219973 4705 flags.go:64] FLAG: --qos-reserved=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219982 4705 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219991 4705 flags.go:64] FLAG: --register-node="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220000 4705 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220009 4705 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220024 4705 flags.go:64] FLAG: --registry-burst="10"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220033 4705 flags.go:64] FLAG: --registry-qps="5"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220042 4705 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220052 4705 flags.go:64] FLAG: --reserved-memory=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220063 4705 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220073 4705 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220082 4705 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220090 4705 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220117 4705 flags.go:64] FLAG: --runonce="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220126 4705 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220135 4705 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220145 4705 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220153 4705 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220162 4705 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220172 4705 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220182 4705 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220191 4705 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220200 4705 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220209 4705 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220218 4705 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220227 4705 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220236 4705 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220246 4705 flags.go:64] FLAG: --system-cgroups=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220254 4705 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220268 4705 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220277 4705 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220286 4705 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220297 4705 flags.go:64] FLAG: --tls-min-version=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220306 4705 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220315 4705 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220334 4705 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220343 4705 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220353 4705 flags.go:64] FLAG: --v="2"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220364 4705 flags.go:64] FLAG: --version="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220398 4705 flags.go:64] FLAG: --vmodule=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220409 4705 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220419 4705 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220639 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220650 4705 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220661 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220671 4705 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220680 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220691 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220701 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220712 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220720 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220729 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220738 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220747 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220755 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220763 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220771 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220780 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220788 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220797 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220805 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220813 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220821 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220828 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220836 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220844 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220851 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220871 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220879 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220887 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220895 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220902 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220910 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220917 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220925 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220932 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220940 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220948 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220956 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220964 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220973 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220981 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220989 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221000 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221010 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221018 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221027 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221035 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221043 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221051 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221061 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221071 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221079 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221087 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221096 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221103 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221111 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221118 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221127 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221140 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221148 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221155 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221163 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221171 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221179 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221187 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221194 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221202 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221209 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221217 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221227 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221237 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221244 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.221257 4705 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.230380 4705 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.230401 4705 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230468 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230475 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230481 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230486 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230490 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230494 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230498 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230501 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230505 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230509 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230513 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230516 4705 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230520 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230523 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230527 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230530 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230534 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230538 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230542 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230545 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230549 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230553 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230556 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230560 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230564 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230568 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230571 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230576 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230580 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230583 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230588 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230592 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230595 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230599 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230604 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230607 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230611 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230615 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230618 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230622 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230625 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230629 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230633 4705 feature_gate.go:330] unrecognized feature gate: 
NutanixMultiSubnets Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230636 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230640 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230644 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230647 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230652 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230657 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230661 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230665 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230669 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230673 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230677 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230681 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230685 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230689 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230694 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230698 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230702 4705 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230706 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230710 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230714 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230717 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230721 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230724 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230728 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230732 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230735 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230740 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230746 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.230751 4705 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230877 4705 feature_gate.go:330] unrecognized feature gate: Example Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230882 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230886 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230889 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230893 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230897 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230901 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230904 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230908 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230912 4705 feature_gate.go:330] 
unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230916 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230919 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230923 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230926 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230930 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230934 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230937 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230940 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230944 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230948 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230952 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230957 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230961 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230965 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230969 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230973 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230976 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230980 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230984 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230988 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230991 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230995 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230999 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231002 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231006 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231010 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231014 4705 feature_gate.go:330] unrecognized feature gate: 
NodeDisruptionPolicy Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231018 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231021 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231025 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231028 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231032 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231036 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231039 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231043 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231046 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231050 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231053 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231057 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231061 4705 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231064 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 14:53:26 crc 
kubenswrapper[4705]: W0216 14:53:26.231069 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231074 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231079 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231084 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231089 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231093 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231097 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231101 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231105 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231109 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231113 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231117 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231122 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231126 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231130 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231133 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231137 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231141 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231145 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231149 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.231155 4705 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.231309 4705 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.235297 4705 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 
14:53:26.235388 4705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.237002 4705 server.go:997] "Starting client certificate rotation" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.237023 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.237267 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-10 14:08:53.361911321 +0000 UTC Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.237430 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.261181 4705 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.264422 4705 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.264429 4705 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.281023 4705 log.go:25] "Validated CRI v1 runtime API" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.310749 4705 log.go:25] "Validated CRI v1 image API" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.312047 4705 server.go:1437] "Using cgroup driver setting 
received from the CRI runtime" cgroupDriver="systemd" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.318120 4705 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-14-48-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.318161 4705 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.334250 4705 manager.go:217] Machine: {Timestamp:2026-02-16 14:53:26.330214515 +0000 UTC m=+0.515191611 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e0a92891-331c-4cfd-852e-c93d09da3492 BootID:c4ce382a-96e5-4027-9451-936b39edc61d Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 
Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:62:bb:f1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:62:bb:f1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:dc:57:22 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:fc:3a:f4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ef:d2:79 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:28:52:05 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:d5:90:d1:c5:6f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:62:24:5b:c6:5a:e0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 
Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown 
InstanceID:None} Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.334522 4705 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.334671 4705 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337047 4705 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337562 4705 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337621 4705 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Qu
antity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337955 4705 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337975 4705 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.338634 4705 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.338689 4705 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.339945 4705 state_mem.go:36] "Initialized new in-memory state store" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.340091 4705 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344723 4705 kubelet.go:418] "Attempting to sync node with API server" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344760 4705 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344823 4705 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344853 4705 kubelet.go:324] "Adding apiserver pod source" 
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344877 4705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.348509 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.348583 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.348698 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.348788 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.349794 4705 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.350767 4705 certificate_store.go:130] Loading cert/key pair from 
"/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.354563 4705 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356590 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356631 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356645 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356658 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356749 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356766 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356781 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356803 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356818 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356832 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356851 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356864 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.357767
4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.358588 4705 server.go:1280] "Started kubelet"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.361274 4705 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.361957 4705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.362114 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Feb 16 14:53:26 crc systemd[1]: Started Kubernetes Kubelet.
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.363364 4705 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.364569 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.364602 4705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.364632 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:45:41.504749566 +0000 UTC
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.372594 4705 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.372621 4705 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.372725 4705 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 16 14:53:26 crc
kubenswrapper[4705]: E0216 14:53:26.372964 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="200ms"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.373698 4705 factory.go:55] Registering systemd factory
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.373872 4705 factory.go:221] Registration of the systemd container factory successfully
Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.373877 4705 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374463 4705 factory.go:153] Registering CRI-O factory
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374499 4705 factory.go:221] Registration of the crio container factory successfully
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.374461 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.374597 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374611 4705 server.go:460] "Adding debug handlers to kubelet server"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374618 4705 factory.go:219] Registration of the containerd container factory failed: unable to create containerd
client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374846 4705 factory.go:103] Registering Raw factory
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374879 4705 manager.go:1196] Started watching for new ooms in manager
Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.373921 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c1c53e217a88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:53:26.35854708 +0000 UTC m=+0.543524186,LastTimestamp:2026-02-16 14:53:26.35854708 +0000 UTC m=+0.543524186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.376620 4705 manager.go:319] Starting recovery of all containers
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.383847 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.383945 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 16 14:53:26 crc
kubenswrapper[4705]: I0216 14:53:26.383972 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.383992 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384010 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384028 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384046 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384064 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384086 4705 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384106 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384159 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384179 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384241 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384349 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384420 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384452 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384477 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384503 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384530 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384557 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384585 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384612 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384640 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384669 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384693 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384719 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384761 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384794 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384838 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384868 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384899 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384931 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384958 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384984 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385012 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385039 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385067 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385094 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385122 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" 
seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385155 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385189 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385218 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385247 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385275 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385351 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 
14:53:26.385410 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385440 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385467 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385493 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385520 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385548 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385577 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385624 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385655 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385685 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385717 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385748 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385775 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385801 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385830 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385856 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385883 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385909 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385936 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385964 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385988 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386013 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386038 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386064 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386091 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386116 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386144 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386173 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386201 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386232 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386258 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386285 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386313 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386341 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386434 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386468 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386499 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" 
seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386527 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386552 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386580 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386607 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386632 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386661 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386686 4705 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386711 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386738 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386764 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386791 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386815 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386840 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386866 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386891 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386915 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386945 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386973 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386999 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387024 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387048 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387077 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387113 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387142 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387171 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387199 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387226 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387252 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387280 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387306 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387333 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387358 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387450 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387480 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387507 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387538 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387563 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387592 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387621 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387646 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387672 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387697 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387724 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387752 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387779 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387804 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387831 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387861 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387887 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387914 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387942 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387967 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387992 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388020 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388046 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388072 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388100 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388150 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388178 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388207 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388233 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388259 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388286 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388317 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388343 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388403 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388435 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388462 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388488 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388518 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388544 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388570 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388599 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388628 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388655 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388684 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388710 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388738 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388766 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388792 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388825 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388854 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388883 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393419 4705 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393478 4705 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393505 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393520 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393538 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393554 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393567 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393582 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393595 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393615 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393626 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393636 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393655 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393667 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393683 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393695 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393707 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393722 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393769 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393786 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393799 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393815 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393831 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393844 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393861 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393874 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393890 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393906 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393919 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393936 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393949 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393962 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393976 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393991 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394010 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394026 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394042 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394059 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394074 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394090 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394104 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394117 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394133 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394143 4705 reconstruct.go:97] "Volume reconstruction finished" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394150 4705 reconciler.go:26] "Reconciler: start to sync state" Feb 16 14:53:26 crc 
kubenswrapper[4705]: I0216 14:53:26.411401 4705 manager.go:324] Recovery completed Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.415046 4705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.417255 4705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.417490 4705 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.417678 4705 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.418244 4705 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.418309 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.418669 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.430855 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.432476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.432517 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.432534 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.433252 4705 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.433271 4705 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.433290 4705 state_mem.go:36] "Initialized new in-memory state store" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.447447 4705 policy_none.go:49] "None policy: Start" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.448391 4705 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.448428 4705 state_mem.go:35] "Initializing new in-memory state store" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.474347 4705 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.508753 4705 manager.go:334] "Starting Device Plugin manager" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.508814 4705 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.508831 4705 server.go:79] "Starting device plugin registration server" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509343 4705 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509379 4705 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509720 4705 
plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509821 4705 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509832 4705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.518498 4705 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.518576 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.519799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.519829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.519838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.519964 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.520155 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.520214 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521322 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521614 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521726 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522099 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522196 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522296 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522317 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523134 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523144 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523242 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523289 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523662 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc 
kubenswrapper[4705]: I0216 14:53:26.523845 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523875 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.524094 4705 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.524921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.524938 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.524946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525556 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525726 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525756 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.526452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.526469 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.526476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.574028 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="400ms" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596405 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596430 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596447 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596466 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596483 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596514 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596531 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596546 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596560 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596597 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596611 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596625 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.610902 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.614645 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.614934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.615075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.615226 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.616093 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 14:53:26 
crc kubenswrapper[4705]: I0216 14:53:26.697435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697498 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697523 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697544 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697564 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697589 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697611 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697653 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697671 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 
16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697712 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697733 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697774 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698116 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698156 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698220 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698263 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698289 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698313 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc 
kubenswrapper[4705]: I0216 14:53:26.698347 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698391 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698411 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698429 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698331 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698453 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698448 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698506 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.817225 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.818933 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.818966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.818978 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.819000 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.819404 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 14:53:26 crc 
kubenswrapper[4705]: I0216 14:53:26.878898 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.901289 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.908429 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.915879 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-183cdc2260db85e735465adc4b2c9c9154b24894f6448d191dafbde01cf6767c WatchSource:0}: Error finding container 183cdc2260db85e735465adc4b2c9c9154b24894f6448d191dafbde01cf6767c: Status 404 returned error can't find the container with id 183cdc2260db85e735465adc4b2c9c9154b24894f6448d191dafbde01cf6767c Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.926167 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fec88377e91280a3cc99a832dab3909f5a2ac7c6477dfd4cc906fe1c5a1335a3 WatchSource:0}: Error finding container fec88377e91280a3cc99a832dab3909f5a2ac7c6477dfd4cc906fe1c5a1335a3: Status 404 returned error can't find the container with id fec88377e91280a3cc99a832dab3909f5a2ac7c6477dfd4cc906fe1c5a1335a3 Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.927021 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-84ebda7619418d6e4917661b49236c48bc209ff7a28dc73c61ea21b8820032dc WatchSource:0}: Error finding 
container 84ebda7619418d6e4917661b49236c48bc209ff7a28dc73c61ea21b8820032dc: Status 404 returned error can't find the container with id 84ebda7619418d6e4917661b49236c48bc209ff7a28dc73c61ea21b8820032dc Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.930869 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.935770 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.950507 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-5e8bc54e53026a9830f5200bf898bcf611f28fe6c0a596e6b7f5e117856b4af0 WatchSource:0}: Error finding container 5e8bc54e53026a9830f5200bf898bcf611f28fe6c0a596e6b7f5e117856b4af0: Status 404 returned error can't find the container with id 5e8bc54e53026a9830f5200bf898bcf611f28fe6c0a596e6b7f5e117856b4af0 Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.975692 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="800ms" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.220447 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.222276 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.222345 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.222363 
4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.222430 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.222868 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.363518 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.365519 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:39:54.309802767 +0000 UTC Feb 16 14:53:27 crc kubenswrapper[4705]: W0216 14:53:27.396538 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.396620 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:27 crc kubenswrapper[4705]: W0216 14:53:27.411988 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.412065 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.423109 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"84ebda7619418d6e4917661b49236c48bc209ff7a28dc73c61ea21b8820032dc"} Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.426289 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fec88377e91280a3cc99a832dab3909f5a2ac7c6477dfd4cc906fe1c5a1335a3"} Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.427325 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"183cdc2260db85e735465adc4b2c9c9154b24894f6448d191dafbde01cf6767c"} Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.428280 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e8bc54e53026a9830f5200bf898bcf611f28fe6c0a596e6b7f5e117856b4af0"} Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.429136 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b306ad403b24de209f4328e7c904434e6a863cc98493518aabf86d03063c04d5"} Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.697212 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c1c53e217a88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:53:26.35854708 +0000 UTC m=+0.543524186,LastTimestamp:2026-02-16 14:53:26.35854708 +0000 UTC m=+0.543524186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:53:27 crc kubenswrapper[4705]: W0216 14:53:27.731710 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.731813 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.776937 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="1.6s" Feb 16 14:53:27 crc kubenswrapper[4705]: W0216 14:53:27.949646 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.949740 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.023792 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.026169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.026240 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.026261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.026301 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:28 crc kubenswrapper[4705]: E0216 14:53:28.027202 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 
14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.363959 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.365964 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:53:57.233856948 +0000 UTC Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.435528 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9" exitCode=0 Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.435705 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.435910 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.436802 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.437060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.437095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.437113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: E0216 
14:53:28.438478 4705 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.438726 4705 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8" exitCode=0 Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.438919 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.439336 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.439778 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.440671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.440869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.441014 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.441253 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc 
kubenswrapper[4705]: I0216 14:53:28.441324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.441361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.443914 4705 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f" exitCode=0 Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.443999 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.444711 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.446559 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.447008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.447188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.447986 4705 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a2d206ac5a36eaa4c99c4801a3e9a925a34a396ca196663bf0cf2fac451726d0" exitCode=0 Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.448141 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a2d206ac5a36eaa4c99c4801a3e9a925a34a396ca196663bf0cf2fac451726d0"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.448250 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.449787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.449831 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.449849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453226 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453281 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453321 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453441 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.454826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.454868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.454881 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: W0216 14:53:29.103543 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 14:53:29.103639 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.362852 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:29 crc 
kubenswrapper[4705]: I0216 14:53:29.366942 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:31:08.564018077 +0000 UTC Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 14:53:29.377890 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="3.2s" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.460785 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5b947f41146cb72121d65dd9fbf450be2466414f7e51fcd4b73c8bc1f5d78979"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.460866 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.464156 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.464229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.464249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.469333 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.469425 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.469442 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.469454 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.473358 4705 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb" exitCode=0 Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.473500 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.473532 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.474838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.474877 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.474891 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.478269 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.478847 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.479194 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.479233 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.479254 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480223 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480276 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: W0216 14:53:29.508956 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 14:53:29.509061 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.627608 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.628927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.628960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.628969 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.628993 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 
14:53:29.629342 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 14:53:29 crc kubenswrapper[4705]: W0216 14:53:29.755554 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 14:53:29.755662 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.367842 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:47:03.240809993 +0000 UTC Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.490091 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d"} Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.490175 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.492864 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.492939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.492957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494243 4705 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf" exitCode=0 Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494355 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494425 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494468 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494355 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf"} Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494519 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.495946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.495990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.495952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496037 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496009 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.368876 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:06:33.786317613 +0000 UTC Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d"} Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507734 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507783 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd"} Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507816 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8"} Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507824 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.509735 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.509805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.509825 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.087490 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.369638 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:52:44.967787084 +0000 UTC Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519208 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b"} Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519307 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081"} Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519318 4705 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519418 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519440 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521334 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521428 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.821494 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.830285 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.831959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.832010 
4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.832025 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.832059 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.370649 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 03:18:23.414796318 +0000 UTC Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.425838 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.426278 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.428421 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.428497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.428517 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.472892 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.511407 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.523021 4705 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.523111 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.523178 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.523253 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525486 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 
14:53:33.525684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.371109 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:17:14.930951777 +0000 UTC Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.577746 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.578040 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.579932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.579988 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.580016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.618850 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.619096 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.621055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.621224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.621254 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:35 crc kubenswrapper[4705]: I0216 14:53:35.371313 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:52:01.537443827 +0000 UTC Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.243234 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.243504 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.245626 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.245692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.245714 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.251939 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.372063 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:54:24.027185085 +0000 UTC Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.426325 4705 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.426524 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 14:53:36 crc kubenswrapper[4705]: E0216 14:53:36.524898 4705 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.532557 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.532773 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.533881 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.533935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.534017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.666273 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.666636 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.668332 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.668458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.668518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.372667 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:00:53.161449808 +0000 UTC Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.535939 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.537571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.537652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.537674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.542931 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:38 crc kubenswrapper[4705]: I0216 14:53:38.373589 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:18:13.367939656 +0000 UTC Feb 16 14:53:38 crc kubenswrapper[4705]: I0216 14:53:38.542015 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:38 crc 
kubenswrapper[4705]: I0216 14:53:38.543304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4705]: I0216 14:53:38.543409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4705]: I0216 14:53:38.543432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4705]: I0216 14:53:39.374419 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:40:38.115672971 +0000 UTC Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.366426 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.374776 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:14:31.640413432 +0000 UTC Feb 16 14:53:40 crc kubenswrapper[4705]: W0216 14:53:40.452825 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.452943 4705 trace.go:236] Trace[1731117463]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:30.450) (total time: 10002ms): Feb 16 14:53:40 crc kubenswrapper[4705]: Trace[1731117463]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake 
timeout 10002ms (14:53:40.452) Feb 16 14:53:40 crc kubenswrapper[4705]: Trace[1731117463]: [10.002480553s] [10.002480553s] END Feb 16 14:53:40 crc kubenswrapper[4705]: E0216 14:53:40.452980 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.708444 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.708794 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.710404 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.710466 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.710478 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.851142 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.851247 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.865435 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.865502 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 14:53:41 crc kubenswrapper[4705]: I0216 14:53:41.375192 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:07:25.511782363 +0000 UTC Feb 16 14:53:42 crc kubenswrapper[4705]: I0216 14:53:42.092991 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]log ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]etcd ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 16 14:53:42 crc kubenswrapper[4705]: 
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/priority-and-fairness-filter ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-apiextensions-informers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-apiextensions-controllers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/crd-informer-synced ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-system-namespaces-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/bootstrap-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 16 14:53:42 crc kubenswrapper[4705]: 
[+]poststarthook/start-kube-aggregator-informers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-registration-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-discovery-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]autoregister-completion ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-openapi-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: livez check failed Feb 16 14:53:42 crc kubenswrapper[4705]: I0216 14:53:42.093817 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:53:42 crc kubenswrapper[4705]: I0216 14:53:42.375553 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:15:46.507731422 +0000 UTC Feb 16 14:53:43 crc kubenswrapper[4705]: I0216 14:53:43.375701 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:54:49.204180321 +0000 UTC Feb 16 14:53:44 crc kubenswrapper[4705]: I0216 14:53:44.376700 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:00:53.653682345 +0000 UTC 
Feb 16 14:53:44 crc kubenswrapper[4705]: I0216 14:53:44.846102 4705 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.376860 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:46:14.417143106 +0000 UTC Feb 16 14:53:45 crc kubenswrapper[4705]: E0216 14:53:45.867437 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 14:53:45 crc kubenswrapper[4705]: E0216 14:53:45.869112 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870177 4705 trace.go:236] Trace[519180976]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:34.905) (total time: 10964ms): Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[519180976]: ---"Objects listed" error: 10964ms (14:53:45.870) Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[519180976]: [10.964476342s] [10.964476342s] END Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870208 4705 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870235 4705 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870279 4705 trace.go:236] Trace[1909566368]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:35.557) (total time: 10312ms): Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[1909566368]: ---"Objects 
listed" error: 10312ms (14:53:45.870) Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[1909566368]: [10.312639122s] [10.312639122s] END Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870299 4705 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.873315 4705 trace.go:236] Trace[736709864]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:34.183) (total time: 11689ms): Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[736709864]: ---"Objects listed" error: 11689ms (14:53:45.873) Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[736709864]: [11.689395657s] [11.689395657s] END Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.873351 4705 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.874045 4705 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.898547 4705 csr.go:261] certificate signing request csr-pvx5v is approved, waiting to be issued Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.907107 4705 csr.go:257] certificate signing request csr-pvx5v is issued Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.911602 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57022->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.911657 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57022->192.168.126.11:17697: read: connection reset by peer" Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.911609 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49578->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.911711 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49578->192.168.126.11:17697: read: connection reset by peer" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.237112 4705 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 14:53:46 crc kubenswrapper[4705]: W0216 14:53:46.237401 4705 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 14:53:46 crc kubenswrapper[4705]: W0216 14:53:46.237431 4705 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.237352 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.47:46132->38.102.83.47:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894c1c560339cfe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:53:26.930160894 +0000 UTC m=+1.115137970,LastTimestamp:2026-02-16 14:53:26.930160894 +0000 UTC m=+1.115137970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:53:46 crc kubenswrapper[4705]: W0216 14:53:46.237536 4705 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.355822 4705 apiserver.go:52] "Watching apiserver" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.359326 4705 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.359556 4705 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.359940 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360202 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.360279 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360528 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360642 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360634 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.360712 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360735 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.360852 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.364775 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.365235 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.365448 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367677 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367759 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367807 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367861 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367931 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367958 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.374039 4705 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 14:53:46 crc kubenswrapper[4705]: 
I0216 14:53:46.377699 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:18:54.185516276 +0000 UTC Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.401811 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.419008 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.427134 4705 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.427198 4705 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.429240 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.438115 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.447671 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.460251 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.472180 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474394 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474643 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474666 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474687 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474706 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474830 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474866 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474889 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474912 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474936 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474977 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475000 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475021 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475040 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475043 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475092 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475109 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475129 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475147 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475165 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475181 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475198 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475218 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475237 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475252 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475267 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475282 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475314 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475330 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475346 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475362 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475424 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475441 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475462 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475479 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " 
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475495 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475539 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475556 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475570 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475584 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475606 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475627 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475701 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475744 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475787 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475803 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475818 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475832 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475849 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475866 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475882 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475900 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475887 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475925 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475921 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476024 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476051 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476059 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476114 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476143 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476169 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476196 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476224 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476249 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476280 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476310 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476364 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476430 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.476463 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476456 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476502 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476530 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476554 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476580 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476606 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476618 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476653 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476680 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476694 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476704 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476778 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476797 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476806 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476788 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477015 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477020 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477063 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477076 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477056 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477119 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477206 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477280 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477311 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477435 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477458 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477618 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477728 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477749 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477819 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477868 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478209 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478248 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478279 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478511 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478503 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478527 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478614 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478660 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478693 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478799 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478877 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479174 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479269 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479653 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479794 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479860 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480099 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480122 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480156 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480510 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480965 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481091 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476806 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481642 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481717 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481789 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481856 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481918 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481968 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482001 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482055 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482029 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482203 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482205 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482252 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482278 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482298 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482318 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482414 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482435 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482461 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482480 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482497 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482516 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482534 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482552 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482571 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482590 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482608 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482625 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482642 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482660 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482678 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482711 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482493 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482700 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482715 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482531 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482853 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.483407 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.483463 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.483725 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484088 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484123 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484167 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484415 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484975 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485025 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485061 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485423 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485444 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485463 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485474 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485496 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485517 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485537 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485544 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485557 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485576 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485594 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485613 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485630 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485648 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485664 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485682 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485686 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485696 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485701 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485743 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485764 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486072 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486148 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486256 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486387 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486477 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486655 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486815 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485783 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486935 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486976 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 14:53:46 crc 
kubenswrapper[4705]: I0216 14:53:46.486994 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487012 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487029 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487050 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487067 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487064 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487088 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487312 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487429 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487500 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487539 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod 
"7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487562 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487622 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487712 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487742 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487864 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.488074 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.488218 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.488639 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.488867 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.490121 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.490363 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.492041 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.492261 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.492680 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493045 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493341 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493644 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493684 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493719 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493791 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493827 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493872 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493910 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493947 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493992 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:46 crc 
kubenswrapper[4705]: I0216 14:53:46.494076 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494124 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494160 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494195 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494232 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494273 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494306 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494340 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494448 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494498 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494534 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.494566 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494599 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494633 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494673 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494711 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494851 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494901 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494941 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494995 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495037 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495075 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495093 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495109 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495144 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495156 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495183 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495228 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495269 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495310 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495348 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495394 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495409 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495447 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495533 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495574 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495612 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495747 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495789 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495831 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495870 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495902 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495936 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495971 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496011 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496051 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496089 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496128 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" 
(UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496286 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496428 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496476 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496567 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496604 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496647 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496694 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496747 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496791 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497345 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497405 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.497435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497565 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497584 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497600 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497618 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497634 4705 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497649 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath 
\"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497666 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497681 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497696 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497712 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497727 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497744 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497760 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.497775 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497788 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497800 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497813 4705 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497831 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497846 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497859 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497873 4705 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497888 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497901 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497917 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497930 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497943 4705 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497956 4705 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497970 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497983 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497996 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498009 4705 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498026 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498039 4705 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498054 4705 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498068 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" 
DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498082 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498096 4705 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498109 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498122 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498135 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498148 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498160 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498174 4705 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498187 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498200 4705 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498213 4705 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498227 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498240 4705 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498620 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498637 4705 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498650 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498664 4705 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498676 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498694 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498706 4705 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498720 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498733 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" 
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498746 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498763 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498776 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498789 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498805 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498818 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498832 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498845 4705 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498859 4705 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498871 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498884 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498896 4705 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498910 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498924 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498937 4705 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498949 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498962 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498976 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498989 4705 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499002 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499017 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499031 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499048 4705 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499070 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499095 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499113 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499130 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499147 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499166 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc 
kubenswrapper[4705]: I0216 14:53:46.499182 4705 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499200 4705 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499216 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499238 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499256 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499274 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499289 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499303 4705 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499318 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499331 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499344 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499359 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499397 4705 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.501616 4705 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.502923 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495471 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495734 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495990 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496123 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496953 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497016 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497244 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496996 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.499980 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.500697 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:53:47.000669789 +0000 UTC m=+21.185646865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.503132 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.503462 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.506886 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.504604 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.504806 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.504573 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.507001 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.507085 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:47.007048036 +0000 UTC m=+21.192025122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.507163 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:47.007136469 +0000 UTC m=+21.192113555 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.507647 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.510828 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.511117 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.511140 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.511156 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.511189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.511243 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:47.011218292 +0000 UTC m=+21.196195378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.512709 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.512736 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.512750 4705 projected.go:194] Error preparing data for 
projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.512841 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:47.012823506 +0000 UTC m=+21.197800812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.513985 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.515052 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.515229 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.515421 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.515574 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516003 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516033 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516104 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516141 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516288 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516455 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516661 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516784 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516852 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516967 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516971 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517234 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517287 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517431 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517517 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517715 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.518554 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.518726 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.518844 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.519056 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.519574 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.523513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.524198 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.524319 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.525964 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.526447 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.526619 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.526847 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.526987 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.527192 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.527287 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.527573 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.527733 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.528179 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.528478 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.528582 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.529047 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.529863 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.530492 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.530948 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.531180 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.531792 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.532835 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.532844 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.534499 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.534921 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.535382 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.535740 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.535962 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536164 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536208 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536560 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536628 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536732 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536762 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536968 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.537422 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.537295 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.537990 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538071 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538326 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538421 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538544 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538655 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538831 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539078 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539148 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539174 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539249 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539276 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.541238 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.541188 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.544397 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.546133 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.552360 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.554743 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.560501 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.566532 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.567911 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.568141 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d" exitCode=255 Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.568185 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d"} Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.578859 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.589689 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.593272 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600274 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600654 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600703 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600741 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600752 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600761 4705 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600771 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc 
kubenswrapper[4705]: I0216 14:53:46.600780 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600790 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600800 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600810 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600821 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600833 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600816 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600876 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600843 4705 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600922 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600946 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600964 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600989 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601003 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601018 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601033 4705 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601047 4705 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601060 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601074 4705 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601088 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601103 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601118 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601132 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601146 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601161 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601175 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601193 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601207 4705 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601222 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601236 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601254 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601269 4705 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601285 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601298 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601318 4705 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node 
\"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601334 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601350 4705 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601404 4705 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601421 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601435 4705 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601449 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601463 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.601476 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601490 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601505 4705 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601518 4705 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601531 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601545 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601559 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601572 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601586 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601599 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601612 4705 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601624 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601638 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601653 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601670 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 
crc kubenswrapper[4705]: I0216 14:53:46.601687 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601700 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601714 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601729 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601743 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601756 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601769 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601782 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601795 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601825 4705 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601842 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601857 4705 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601872 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601887 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601902 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 
crc kubenswrapper[4705]: I0216 14:53:46.601917 4705 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601932 4705 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601948 4705 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601962 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601977 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601992 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602007 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602022 4705 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602036 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602051 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602066 4705 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602080 4705 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602096 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602113 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602128 4705 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602145 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602160 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602177 4705 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602192 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602214 4705 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602230 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.607184 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.608660 4705 scope.go:117] "RemoveContainer" 
containerID="50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.627686 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.648013 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.659244 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.675128 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.685209 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.687964 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.695133 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.701109 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.703867 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.716078 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.728757 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.908224 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 14:48:45 +0000 UTC, rotation deadline is 2026-12-22 12:34:35.477626854 +0000 UTC Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.908273 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7413h40m48.569355685s for next certificate rotation Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.005607 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.005811 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:53:48.005777402 +0000 UTC m=+22.190754678 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.093251 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106319 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106591 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106617 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106640 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106650 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106717 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:48.106683159 +0000 UTC m=+22.291660235 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106809 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106815 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106859 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106872 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106924 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:48.106909485 +0000 UTC m=+22.291886561 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106806 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106963 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:48.106957337 +0000 UTC m=+22.291934413 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106831 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106978 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.107002 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:48.106996758 +0000 UTC m=+22.291973834 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.120960 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.132797 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.146393 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.183281 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.211204 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.235920 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.378653 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:41:45.847949411 +0000 UTC Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.419321 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.419477 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.571866 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.571912 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"13ba08b7aaa7aa92e52ddd42a7da43c1bb3f0bb40d70492599afb29d0b335469"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.574492 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.576044 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.576972 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.583721 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"735f0d146eef10de2a44400745b87e04a1f33bf2d095ec441be4a9c3c9c89be2"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.584415 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.587943 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.587991 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.588003 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4a26a9b10f6414261afe596837cbbf3b60cf6df49b031411d434d212e832bfee"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.594563 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube
-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.616028 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.629206 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.642302 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.654112 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.666814 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.680349 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.699021 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.710255 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.715765 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2ljf7"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.716131 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.717690 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tshhr"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.718421 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bflhj"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.718573 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fnnf4"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.718645 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.718694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.719464 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rwkxz"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.719759 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.719935 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.719959 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722198 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722211 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722211 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722275 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722784 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.723492 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.725257 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.725753 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726121 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726340 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726414 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726473 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726393 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726486 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726541 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726653 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726723 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726744 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726743 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726758 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.727079 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.744464 4705 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.756479 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.769992 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.785948 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.809057 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814406 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814468 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814504 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-socket-dir-parent\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814527 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-etc-kubernetes\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814572 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-rootfs\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814606 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm7v9\" (UniqueName: \"kubernetes.io/projected/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-kube-api-access-zm7v9\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 
14:53:47.814640 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814668 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814748 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814772 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-conf-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 
crc kubenswrapper[4705]: I0216 14:53:47.814791 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814810 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814838 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-mcd-auth-proxy-config\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-proxy-tls\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814900 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814926 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-multus-daemon-config\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814946 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814966 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdgkl\" (UniqueName: \"kubernetes.io/projected/55f9230c-7ded-46f1-babb-eba339b0ca6c-kube-api-access-hdgkl\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814984 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-os-release\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815003 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-netns\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815021 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-hostroot\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815040 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-multus-certs\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815059 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhrp\" (UniqueName: \"kubernetes.io/projected/0ec06562-0237-4709-9469-033783d7d545-kube-api-access-6vhrp\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-cnibin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815114 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-os-release\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815131 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815163 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815208 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-bin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815228 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-kubelet\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815247 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-system-cni-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815286 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815305 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/55f9230c-7ded-46f1-babb-eba339b0ca6c-hosts-file\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815326 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815348 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815390 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815412 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-system-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815430 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-cni-binary-copy\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815459 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815479 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815520 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cnibin\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815542 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-multus\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815562 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815585 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9gbv\" (UniqueName: \"kubernetes.io/projected/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-kube-api-access-h9gbv\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815626 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-k8s-cni-cncf-io\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815708 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.830013 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.842937 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.855133 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.873463 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.891118 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916326 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-multus\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916383 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916407 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9gbv\" (UniqueName: 
\"kubernetes.io/projected/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-kube-api-access-h9gbv\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916446 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-k8s-cni-cncf-io\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916463 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916478 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916493 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916514 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-socket-dir-parent\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-etc-kubernetes\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916543 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916557 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-rootfs\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916573 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm7v9\" (UniqueName: \"kubernetes.io/projected/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-kube-api-access-zm7v9\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916588 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916602 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916647 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-conf-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916660 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916677 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916697 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-proxy-tls\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916717 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-mcd-auth-proxy-config\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-multus-daemon-config\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916754 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916770 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916789 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdgkl\" (UniqueName: \"kubernetes.io/projected/55f9230c-7ded-46f1-babb-eba339b0ca6c-kube-api-access-hdgkl\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916807 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-os-release\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916825 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vhrp\" (UniqueName: \"kubernetes.io/projected/0ec06562-0237-4709-9469-033783d7d545-kube-api-access-6vhrp\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916962 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-cnibin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916978 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-os-release\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916992 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-netns\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917006 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-hostroot\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917020 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-multus-certs\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917035 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-bin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917096 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-kubelet\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917129 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917144 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 
14:53:47.917161 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-system-cni-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917177 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/55f9230c-7ded-46f1-babb-eba339b0ca6c-hosts-file\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917206 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-system-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917221 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-cni-binary-copy\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917237 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917252 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917266 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917282 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917303 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cnibin\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cnibin\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917489 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-multus\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917513 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917748 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-k8s-cni-cncf-io\") 
pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917911 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917935 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917962 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-socket-dir-parent\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917981 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-etc-kubernetes\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.918001 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.918022 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-rootfs\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.918143 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.918734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919132 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919379 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919418 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-bin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919496 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-kubelet\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919556 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-system-cni-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919562 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-conf-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919583 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919604 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919673 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/55f9230c-7ded-46f1-babb-eba339b0ca6c-hosts-file\") pod 
\"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919716 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-system-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920251 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920487 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-multus-daemon-config\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920512 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-cni-binary-copy\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920622 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-hostroot\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920699 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920730 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920747 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920762 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920779 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920781 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920820 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-os-release\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-netns\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921103 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-os-release\") pod 
\"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-cnibin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921230 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921222 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-multus-certs\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921824 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921832 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-mcd-auth-proxy-config\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.929883 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.936495 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.943861 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-proxy-tls\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.944078 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9gbv\" (UniqueName: \"kubernetes.io/projected/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-kube-api-access-h9gbv\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.950851 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vhrp\" (UniqueName: \"kubernetes.io/projected/0ec06562-0237-4709-9469-033783d7d545-kube-api-access-6vhrp\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.953920 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.957040 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm7v9\" (UniqueName: \"kubernetes.io/projected/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-kube-api-access-zm7v9\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.960558 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdgkl\" (UniqueName: \"kubernetes.io/projected/55f9230c-7ded-46f1-babb-eba339b0ca6c-kube-api-access-hdgkl\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.982725 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.996330 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.008142 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.017536 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.017611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.017770 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:53:50.017751275 +0000 UTC m=+24.202728341 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.027409 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.031709 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2ljf7" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.042474 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.052909 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.058301 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.065812 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:48 crc kubenswrapper[4705]: W0216 14:53:48.093074 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f92e3ed_2ba8_4202_a1b8_7350fadc1d8c.slice/crio-a573e5e7cc9dbf843ce05aa2564758b9ccddceb40e7e40255c47326921a8a793 WatchSource:0}: Error finding container a573e5e7cc9dbf843ce05aa2564758b9ccddceb40e7e40255c47326921a8a793: Status 404 returned error can't find the container with id a573e5e7cc9dbf843ce05aa2564758b9ccddceb40e7e40255c47326921a8a793 Feb 16 14:53:48 crc kubenswrapper[4705]: W0216 14:53:48.093829 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55f9230c_7ded_46f1_babb_eba339b0ca6c.slice/crio-ae9e56e8b2694167edf66709880746dc78fa6d099f03e1e4ff35406bc4a68d19 WatchSource:0}: Error finding container ae9e56e8b2694167edf66709880746dc78fa6d099f03e1e4ff35406bc4a68d19: Status 404 returned error can't find the container with id ae9e56e8b2694167edf66709880746dc78fa6d099f03e1e4ff35406bc4a68d19 Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.118226 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.118271 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.118293 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.118316 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118424 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118436 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118457 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118468 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118425 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118511 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:50.118498637 +0000 UTC m=+24.303475713 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118506 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118593 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:50.118570069 +0000 UTC m=+24.303547205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118603 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118612 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118641 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:50.11860279 +0000 UTC m=+24.303579966 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118663 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 14:53:50.118654122 +0000 UTC m=+24.303631298 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.379729 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:10:29.574826764 +0000 UTC Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.419240 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.419331 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.419464 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.419518 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.424115 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.425534 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.426977 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.427808 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.429227 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.429931 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.430732 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.432526 4705 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.433285 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.434476 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.435090 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.436570 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.437199 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.437940 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.439173 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.440590 4705 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.441940 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.442539 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.444401 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.445237 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.445931 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.448464 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.449031 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.450473 4705 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.450932 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.452151 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.452894 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.453795 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.454447 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.454990 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.456166 4705 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 14:53:48 crc kubenswrapper[4705]: 
I0216 14:53:48.456313 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.458189 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.459278 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.459796 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.461915 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.463235 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.463942 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.465469 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: 
I0216 14:53:48.466479 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.467090 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.468314 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.473764 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.474481 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.474975 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.475538 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.476214 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: 
I0216 14:53:48.476962 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.477485 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.477981 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.478469 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.479012 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.479594 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.480102 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.592302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" 
event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.592738 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"bebcfc949c7b1affe236f7ab803679c4e2f0ba3699014c926fd5504ebfd97dac"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.594006 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.594037 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.594049 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"a573e5e7cc9dbf843ce05aa2564758b9ccddceb40e7e40255c47326921a8a793"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.595645 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e" exitCode=0 Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.595704 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" 
event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.595718 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerStarted","Data":"347468729d5581dc8fbc6dfd3995d34234764644b295ce5318e33b2927ac1908"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.597268 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bflhj" event={"ID":"55f9230c-7ded-46f1-babb-eba339b0ca6c","Type":"ContainerStarted","Data":"fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.597413 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bflhj" event={"ID":"55f9230c-7ded-46f1-babb-eba339b0ca6c","Type":"ContainerStarted","Data":"ae9e56e8b2694167edf66709880746dc78fa6d099f03e1e4ff35406bc4a68d19"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.598406 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" exitCode=0 Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.598490 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.598530 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"42045b84aca42a832078848d2b0993c882266e872a0d71d75f9c0c7f12bd5a14"} Feb 16 
14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.609643 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.622544 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.641788 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.655770 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.671207 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.685153 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.699053 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.713345 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.731051 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.750570 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.771222 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.786233 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.798001 4705 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.812299 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.837688 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.851908 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.885857 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.930573 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.961547 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.982450 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.993311 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.007805 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.020230 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.037045 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.380567 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-05 21:06:22.399383689 +0000 UTC Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.419068 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:49 crc kubenswrapper[4705]: E0216 14:53:49.419203 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.602791 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7" exitCode=0 Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.602862 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605902 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 
14:53:49.605934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605944 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605952 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605960 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.606975 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.622984 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.649018 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.685023 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.700666 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.713009 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.731204 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.755539 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.771524 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.784836 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.803954 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.819242 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.830883 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.844157 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.858928 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.871785 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.883512 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.909241 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.923169 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.940497 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.960292 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.977834 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.997343 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.017129 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.034034 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.041260 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.041514 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.041493436 +0000 UTC m=+28.226470512 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.142523 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.142567 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.142587 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.142605 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142734 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142750 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142744 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142791 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142863 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 
14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142889 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142906 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142761 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142873 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.142827965 +0000 UTC m=+28.327805101 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142977 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.142951989 +0000 UTC m=+28.327929065 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142989 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.14298323 +0000 UTC m=+28.327960296 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.143000 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.14299544 +0000 UTC m=+28.327972516 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.372937 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-f7zct"] Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.373266 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.375514 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.376280 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.376532 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.376793 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.380879 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:26:21.646756686 +0000 UTC Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.393065 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.407102 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.419264 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.419355 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.419422 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.419552 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.422270 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.438903 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.444503 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e35c89f5-2045-4451-b301-44615b5f73e6-serviceca\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.444572 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e35c89f5-2045-4451-b301-44615b5f73e6-host\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.444598 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5rvf\" (UniqueName: \"kubernetes.io/projected/e35c89f5-2045-4451-b301-44615b5f73e6-kube-api-access-s5rvf\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.455768 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.468959 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.496701 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.514451 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.528354 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.545833 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e35c89f5-2045-4451-b301-44615b5f73e6-host\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.545888 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5rvf\" (UniqueName: \"kubernetes.io/projected/e35c89f5-2045-4451-b301-44615b5f73e6-kube-api-access-s5rvf\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.545912 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e35c89f5-2045-4451-b301-44615b5f73e6-serviceca\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.545954 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e35c89f5-2045-4451-b301-44615b5f73e6-host\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.546820 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e35c89f5-2045-4451-b301-44615b5f73e6-serviceca\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.548897 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.565817 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.573148 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5rvf\" (UniqueName: \"kubernetes.io/projected/e35c89f5-2045-4451-b301-44615b5f73e6-kube-api-access-s5rvf\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.590117 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.610869 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.612757 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b" exitCode=0 Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.612877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" 
event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b"} Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.642407 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.667315 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.679295 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.690085 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.696075 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 
14:53:50.713113 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.731065 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.739395 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.755102 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.755394 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.759175 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.771737 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.784555 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.797022 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.808631 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.826640 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.838550 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.852987 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.864108 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.875518 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.888914 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.901846 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.918925 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.931211 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.946855 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.959467 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.975486 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 
14:53:50.987476 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.009036 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.034524 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.066817 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.381458 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 01:18:05.640740839 +0000 UTC Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.418577 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:51 crc kubenswrapper[4705]: E0216 14:53:51.418706 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.621621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0"} Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.623825 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f7zct" event={"ID":"e35c89f5-2045-4451-b301-44615b5f73e6","Type":"ContainerStarted","Data":"d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31"} Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.623851 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f7zct" event={"ID":"e35c89f5-2045-4451-b301-44615b5f73e6","Type":"ContainerStarted","Data":"ccdf87c848f97940099a55a97f506c8acd18cd36e08a6f4487c5e1d6d910b067"} Feb 16 14:53:51 
crc kubenswrapper[4705]: I0216 14:53:51.627425 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f" exitCode=0 Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.627490 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f"} Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.643701 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.665809 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.690866 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.716724 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.730429 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.744505 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.755815 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.771004 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.785457 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.800228 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.811042 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.823480 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.835650 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.852036 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 
14:53:51.865676 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 
14:53:51.880216 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.895960 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.917886 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.945003 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.961963 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.974743 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.994410 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.008675 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.026850 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.065749 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.108111 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.147701 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.188868 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.270131 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.271826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.271849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.271860 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.271951 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.278299 4705 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.278594 4705 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.279462 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc 
kubenswrapper[4705]: I0216 14:53:52.279491 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.279501 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.279518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.279530 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.303380 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308076 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.322676 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332082 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332123 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332149 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.345262 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.349956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.349989 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.349999 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.350015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.350025 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.361882 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365724 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365780 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.377928 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.378234 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380231 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.381948 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:13:31.885527744 +0000 UTC Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.421620 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.422002 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.422212 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.422710 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483681 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483703 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483717 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587282 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587340 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587388 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587400 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.637871 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2" exitCode=0 Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.637950 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.654129 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.671982 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691357 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691454 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691466 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691505 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.707826 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.728462 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.750104 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.765399 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.785143 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794265 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.803603 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.818814 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.843565 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.864686 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897128 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897879 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897933 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.926884 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.945129 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001177 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104357 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104465 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207298 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207394 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207411 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310288 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.382213 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:01:08.368906964 +0000 UTC Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413558 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413621 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413676 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.418777 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:53 crc kubenswrapper[4705]: E0216 14:53:53.418940 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.429830 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.433396 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.440109 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.444201 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb7
2bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.459562 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.480836 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.495361 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.509981 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516052 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516065 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516099 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.533532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.559851 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.575499 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.592924 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.607759 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619505 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619543 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.620785 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.639923 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.646245 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b" exitCode=0 Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.647410 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.662357 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.682877 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.700170 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.716982 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722259 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.737600 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.773071 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.790203 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.807669 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.814315 4705 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.822474 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824640 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824669 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824683 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.839159 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.857206 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.874317 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.898071 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.914949 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932073 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932203 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.933413 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.960326 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.994924 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.034765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.034906 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.034967 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.035038 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.035097 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.088508 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.088745 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.088707135 +0000 UTC m=+36.273684211 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138653 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138697 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.189845 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.189917 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.189952 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.189975 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190130 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190195 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190215 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190230 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190138 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190297 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190314 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190264 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.190235788 +0000 UTC m=+36.375212904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190408 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.190357971 +0000 UTC m=+36.375335057 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190446 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.190433923 +0000 UTC m=+36.375411009 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190714 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190826 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.190799223 +0000 UTC m=+36.375776299 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241487 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241500 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344863 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344918 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344961 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.383884 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:37:24.596521098 +0000 UTC Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.418639 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.418818 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.418952 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.419235 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448668 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448699 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448726 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550735 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550750 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550768 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550780 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653757 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.655654 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerStarted","Data":"a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.661180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.662234 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.662292 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.682437 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.740024 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.740919 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.743507 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756825 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756852 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756873 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.766664 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.786656 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.811869 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.833606 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.856707 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860281 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860457 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.883428 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.920095 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.936288 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.948558 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.959660 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964237 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.982073 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.002780 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.025100 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.048444 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.068006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.068077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.068101 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc 
kubenswrapper[4705]: I0216 14:53:55.068133 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.068159 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.070016 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.090621 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.106179 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.133203 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.159292 4705 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171756 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171846 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171861 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.178932 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.207313 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.222809 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.249625 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.265505 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.274861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.274932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.274959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.274995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.275015 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.281283 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.303204 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.315694 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.339549 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.353831 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378063 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378111 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.384258 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:24:26.771660037 +0000 UTC Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.419204 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:55 crc kubenswrapper[4705]: E0216 14:53:55.419645 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481162 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585102 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585123 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585172 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.664777 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689322 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689422 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792693 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792791 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792843 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896183 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896276 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896305 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896326 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000052 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000177 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000217 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000247 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103506 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103566 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103628 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206558 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206681 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206695 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.309770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.310106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.310231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.310350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.310489 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.384561 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:47:17.551910855 +0000 UTC Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.410659 4705 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414768 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414800 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414813 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.418627 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.418636 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:56 crc kubenswrapper[4705]: E0216 14:53:56.418886 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:56 crc kubenswrapper[4705]: E0216 14:53:56.419023 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.434441 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.449042 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.471535 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.493040 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519667 4705 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519775 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519795 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.540052 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.571288 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.598333 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.618604 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.623114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc 
kubenswrapper[4705]: I0216 14:53:56.623158 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.623173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.623195 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.623208 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.633707 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.654708 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.670945 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.673107 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.692942 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.707015 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.721460 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725566 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725636 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725651 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.740106 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828662 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828777 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828797 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931818 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931827 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931852 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034906 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.137916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.137997 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.138018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.138045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.138067 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241594 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241670 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241696 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241717 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345462 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345519 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.385923 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:46:23.639112761 +0000 UTC Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.418818 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:57 crc kubenswrapper[4705]: E0216 14:53:57.419071 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449411 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449501 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449521 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553282 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553307 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553356 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657726 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657796 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.677905 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/0.log" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.682504 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a" exitCode=1 Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.682584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.683909 4705 scope.go:117] "RemoveContainer" containerID="e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.706141 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.723339 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.751040 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.761930 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.762001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.762021 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.762049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.762068 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.788609 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.813699 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.837326 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866489 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866518 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.874462 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 
14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address 
\\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648
375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.899762 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.918675 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.942630 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.960354 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970281 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970489 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.979274 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.998034 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.020088 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.043780 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.072736 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc 
kubenswrapper[4705]: I0216 14:53:58.072774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.072784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.072801 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.072815 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176206 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176243 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279389 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279440 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382281 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382304 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.386281 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:54:55.078840589 +0000 UTC Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.418693 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.418854 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:58 crc kubenswrapper[4705]: E0216 14:53:58.418978 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:58 crc kubenswrapper[4705]: E0216 14:53:58.419155 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485709 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485796 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589216 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691594 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/0.log" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691935 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.697693 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.697919 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.736426 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.760957 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.781340 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795642 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795753 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795790 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795816 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.807284 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address 
\\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.824855 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.842117 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.864022 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.877323 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899099 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899171 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899252 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.900361 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.921497 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.945558 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.965509 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.991560 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7
130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:
53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003217 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003233 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.013246 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.034335 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106183 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106229 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209757 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313215 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313312 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313421 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313455 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.387424 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:40:26.386103975 +0000 UTC Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417525 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417671 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.418602 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:59 crc kubenswrapper[4705]: E0216 14:53:59.418822 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522233 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626393 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626408 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626456 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.705034 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/1.log" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.706336 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/0.log" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.710731 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" exitCode=1 Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.710801 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.710930 4705 scope.go:117] "RemoveContainer" containerID="e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.711762 4705 scope.go:117] "RemoveContainer" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" Feb 16 14:53:59 crc kubenswrapper[4705]: E0216 14:53:59.712027 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730226 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730328 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730349 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.737786 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.773864 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.797351 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832867 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832959 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832977 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.833729 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.852889 4705 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.872232 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.890066 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.902965 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.917693 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.931621 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935820 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.958450 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.976577 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.998304 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.018331 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038901 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038931 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.040187 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142402 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142431 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142441 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.245902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.245959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.245972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.245993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.246003 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349147 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349216 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.388237 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:12:47.845996548 +0000 UTC Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.418693 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.418783 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:00 crc kubenswrapper[4705]: E0216 14:54:00.418909 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:00 crc kubenswrapper[4705]: E0216 14:54:00.419076 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.441359 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66"] Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.442296 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.445266 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.446781 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456237 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456348 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456431 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.465291 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.491225 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.510411 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.530773 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.551126 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559891 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.572980 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.574023 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33860ee2-697c-4950-af95-26d7916c0a4f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.574155 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.574234 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.574273 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvxb\" (UniqueName: \"kubernetes.io/projected/33860ee2-697c-4950-af95-26d7916c0a4f-kube-api-access-txvxb\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.608283 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:3
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.632048 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.654056 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663324 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663356 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.675885 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33860ee2-697c-4950-af95-26d7916c0a4f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.676017 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.676088 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.676125 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-txvxb\" (UniqueName: \"kubernetes.io/projected/33860ee2-697c-4950-af95-26d7916c0a4f-kube-api-access-txvxb\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.677702 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.677728 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.686725 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33860ee2-697c-4950-af95-26d7916c0a4f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.695809 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.708130 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvxb\" 
(UniqueName: \"kubernetes.io/projected/33860ee2-697c-4950-af95-26d7916c0a4f-kube-api-access-txvxb\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.721269 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/1.log" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.728397 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.751636 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.765428 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.766729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.766882 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.766970 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.767067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.767163 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.780261 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: W0216 14:54:00.792449 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33860ee2_697c_4950_af95_26d7916c0a4f.slice/crio-46bd97614f40b4d789a0bba86d378a4f233639445eef5c0bb7968b609fad9e5b WatchSource:0}: Error finding container 46bd97614f40b4d789a0bba86d378a4f233639445eef5c0bb7968b609fad9e5b: Status 404 returned error can't find the container with id 46bd97614f40b4d789a0bba86d378a4f233639445eef5c0bb7968b609fad9e5b Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.801311 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.825593 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.844015 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872096 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872122 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872140 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976695 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976734 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.081574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.082063 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.082077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.082104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.082123 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185609 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185646 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.388802 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:15:57.317519687 +0000 UTC Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.419338 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:01 crc kubenswrapper[4705]: E0216 14:54:01.419524 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451699 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556258 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556282 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.620727 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-8m64f"] Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.622694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: E0216 14:54:01.622829 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.644745 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660186 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660326 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660352 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.670126 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.688186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.688357 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fdqv\" (UniqueName: \"kubernetes.io/projected/67dea3c6-e6a4-4078-9bf2-6928c39f498b-kube-api-access-6fdqv\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.690936 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.708547 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.723841 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.737831 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" event={"ID":"33860ee2-697c-4950-af95-26d7916c0a4f","Type":"ContainerStarted","Data":"c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.738161 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" event={"ID":"33860ee2-697c-4950-af95-26d7916c0a4f","Type":"ContainerStarted","Data":"d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.738308 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" event={"ID":"33860ee2-697c-4950-af95-26d7916c0a4f","Type":"ContainerStarted","Data":"46bd97614f40b4d789a0bba86d378a4f233639445eef5c0bb7968b609fad9e5b"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.748551 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c
510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.765149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.765539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.765752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.765934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.766111 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.772951 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.790023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.790157 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fdqv\" (UniqueName: \"kubernetes.io/projected/67dea3c6-e6a4-4078-9bf2-6928c39f498b-kube-api-access-6fdqv\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: E0216 14:54:01.790339 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:01 crc kubenswrapper[4705]: E0216 14:54:01.790482 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.290450474 +0000 UTC m=+36.475427550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.798465 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.816105 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fdqv\" (UniqueName: \"kubernetes.io/projected/67dea3c6-e6a4-4078-9bf2-6928c39f498b-kube-api-access-6fdqv\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.825410 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.846668 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.857533 4705 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc 
kubenswrapper[4705]: I0216 14:54:01.870884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.870942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.870959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.870984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.871002 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.875536 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.888845 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.909614 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.924716 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.939354 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.956394 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974837 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.975122 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354
da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.002010 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.037737 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.057757 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.074424 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.079165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.079220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.079234 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc 
kubenswrapper[4705]: I0216 14:54:02.079257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.079274 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.090168 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.093541 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.093770 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.093730136 +0000 UTC m=+52.278707222 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.105097 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc 
kubenswrapper[4705]: I0216 14:54:02.122256 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.137939 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.151953 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.167680 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181598 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181664 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181731 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.185852 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.195220 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.195297 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.195342 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.195414 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195513 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195522 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195579 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195604 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195625 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195650 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.195618839 +0000 UTC m=+52.380595955 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195540 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195691 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.19566665 +0000 UTC m=+52.380643766 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195700 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195718 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.195705641 +0000 UTC m=+52.380682757 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195719 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195802 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.195777033 +0000 UTC m=+52.380754329 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.206554 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.219909 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.237545 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.253087 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.277636 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285473 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.297074 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.297299 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.297460 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:03.29743004 +0000 UTC m=+37.482407146 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.389042 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:22:24.988244472 +0000 UTC Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.390957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.391197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.391428 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.391675 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.391873 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.418872 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.418973 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.419126 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.419221 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495875 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495943 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599677 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599697 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702362 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702438 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702458 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.766960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.767016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.767027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.767043 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.767054 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.784798 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789343 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789482 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789507 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789526 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.811201 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817556 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817585 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817608 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.836487 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841530 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.877653 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883679 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.913930 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.914116 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916469 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916483 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019295 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019399 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019443 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123091 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123184 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123201 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226825 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226872 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.310218 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:03 crc kubenswrapper[4705]: E0216 14:54:03.310548 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:03 crc kubenswrapper[4705]: E0216 14:54:03.310677 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:05.310646734 +0000 UTC m=+39.495623850 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.329939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.330011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.330029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.330056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.330074 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.390157 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:03:27.74052768 +0000 UTC Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.418883 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.418912 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:03 crc kubenswrapper[4705]: E0216 14:54:03.419101 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:03 crc kubenswrapper[4705]: E0216 14:54:03.419257 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433446 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433487 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433545 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536501 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.639897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.639976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.640004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.640037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.640056 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.743900 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.743965 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.743983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.744007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.744024 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848420 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848480 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952101 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952278 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055857 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055911 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159619 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159758 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159781 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264605 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264697 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368117 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368216 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368248 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368268 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.390674 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:01:42.851302112 +0000 UTC Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.419186 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.419244 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:04 crc kubenswrapper[4705]: E0216 14:54:04.419447 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:04 crc kubenswrapper[4705]: E0216 14:54:04.419663 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471657 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471677 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471705 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471728 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575296 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575458 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.624317 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.647781 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.669308 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678199 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.696794 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.720328 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.744400 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.778061 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781445 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781580 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.814022 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.834449 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.856102 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.871996 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884155 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884258 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884277 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.887964 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc 
kubenswrapper[4705]: I0216 14:54:04.905996 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.924707 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.946940 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.965593 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.987837 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.987915 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.987934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc 
kubenswrapper[4705]: I0216 14:54:04.987965 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.987991 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.989440 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.008154 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:
00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091823 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195255 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195297 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298289 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298298 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298363 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.336946 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.337196 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.337309 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:09.337280479 +0000 UTC m=+43.522257585 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.391363 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:36:13.613953876 +0000 UTC Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402876 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.419228 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.419265 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.419568 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.419805 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.506943 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.507018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.507036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.507066 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.507087 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.610871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.611008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.611030 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.611055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.611074 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714754 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714772 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.745580 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.747252 4705 scope.go:117] "RemoveContainer" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.747632 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.769948 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.790067 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.806296 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044
b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.818941 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.819207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.819447 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.819709 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.819936 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.827136 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.852863 4705 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.875799 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.910917 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923326 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923345 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923421 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.946173 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.971043 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.992877 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.009867 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.027870 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.027980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.028012 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.028048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.028076 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.032241 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc 
kubenswrapper[4705]: I0216 14:54:06.052614 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.076117 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.095106 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.111875 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.129652 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132440 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132464 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132492 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132519 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236385 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.339991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.340086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.340111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.340154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.340181 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.392440 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:09:12.296361742 +0000 UTC Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.418990 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.419020 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:06 crc kubenswrapper[4705]: E0216 14:54:06.419153 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:06 crc kubenswrapper[4705]: E0216 14:54:06.419405 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443306 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.445264 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:
53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.464414 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.485780 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.507313 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.540455 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.545787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.545908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.545980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.546024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.546100 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.564050 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.598105 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.620315 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.641466 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650154 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.676306 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.698613 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.719992 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.737249 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753791 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753810 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753850 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.756978 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.777056 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.796024 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc 
kubenswrapper[4705]: I0216 14:54:06.816891 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856386 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856444 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856479 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959951 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063661 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063695 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167695 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167720 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271592 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271736 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271761 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374348 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374445 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374532 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.392906 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:33:51.89743431 +0000 UTC Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.418798 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.418866 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:07 crc kubenswrapper[4705]: E0216 14:54:07.419064 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:07 crc kubenswrapper[4705]: E0216 14:54:07.419264 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477672 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581582 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685418 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685471 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789489 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789655 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893523 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893627 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893657 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893681 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997554 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997664 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997700 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997724 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120190 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223242 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223323 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223431 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.326887 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.326940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.326962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.326984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.327003 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.393149 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:21:16.388569613 +0000 UTC
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.418992 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.419048 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:08 crc kubenswrapper[4705]: E0216 14:54:08.419212 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:08 crc kubenswrapper[4705]: E0216 14:54:08.420052 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430099 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430155 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430174 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430224 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534076 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534174 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534251 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638230 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638255 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741739 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741804 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741852 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741873 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845649 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845729 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948492 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948531 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948548 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051630 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051659 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154700 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154728 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154748 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.257862 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.257949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.257990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.258015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.258033 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361238 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361280 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361296 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.389129 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:54:09 crc kubenswrapper[4705]: E0216 14:54:09.389344 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 14:54:09 crc kubenswrapper[4705]: E0216 14:54:09.389456 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:17.389432874 +0000 UTC m=+51.574409960 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.394004 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:52:01.485510686 +0000 UTC
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.418322 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.418425 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:54:09 crc kubenswrapper[4705]: E0216 14:54:09.418532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:54:09 crc kubenswrapper[4705]: E0216 14:54:09.418684 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464217 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568179 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568245 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672269 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672288 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672314 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672333 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.775960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.776051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.776072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.776103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.776123 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879664 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879722 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983399 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983443 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983462 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983486 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983501 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086747 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086804 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086825 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086855 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086879 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190649 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190673 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190690 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294933 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294998 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.394795 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:22:43.873095223 +0000 UTC
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398669 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398761 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398856 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.419407 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:10 crc kubenswrapper[4705]: E0216 14:54:10.419610 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.419717 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:10 crc kubenswrapper[4705]: E0216 14:54:10.419864 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502253 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502322 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502401 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502423 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605445 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709418 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709441 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812515 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812567 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.916976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.917067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.917094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.917126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.917152 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021466 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021500 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021521 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124678 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124704 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124725 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227724 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227744 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.331845 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.332032 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.332055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.332098 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.332119 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.396023 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:01:47.165547959 +0000 UTC Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.418819 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.418937 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:11 crc kubenswrapper[4705]: E0216 14:54:11.419168 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:11 crc kubenswrapper[4705]: E0216 14:54:11.419335 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436314 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436415 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539133 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539289 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539341 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642274 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642295 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642628 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747678 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747757 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.852798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.852998 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.853024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.853058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.853083 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957348 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957365 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061445 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061529 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061543 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061590 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165473 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268733 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268769 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373577 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373634 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373665 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.396496 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:22:41.37038067 +0000 UTC Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.419163 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:12 crc kubenswrapper[4705]: E0216 14:54:12.419481 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.419189 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:12 crc kubenswrapper[4705]: E0216 14:54:12.420142 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478635 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582670 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582690 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582740 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686341 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686480 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686544 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790186 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790265 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893827 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893848 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997794 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997820 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997841 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.100991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.101058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.101075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.101100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.101118 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204280 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204329 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204350 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232273 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232292 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.253881 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259489 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259503 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.280728 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.286114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.286149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.286161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.286178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.286190 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.306031 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310899 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310918 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310965 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.331176 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336955 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336969 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.359325 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.359500 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362266 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.396728 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:08:34.308942868 +0000 UTC Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.419149 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.419257 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.419358 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.419504 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466424 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466499 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466562 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569576 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569588 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569613 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673519 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776272 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776305 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878763 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878827 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.981964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.982018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.982029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.982048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.982062 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.084966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.085047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.085072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.085103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.085120 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188035 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188156 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188184 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188204 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291462 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394066 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394083 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394094 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.397416 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:13:28.274190396 +0000 UTC Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.418870 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:14 crc kubenswrapper[4705]: E0216 14:54:14.419031 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.419288 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:14 crc kubenswrapper[4705]: E0216 14:54:14.419445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496589 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496658 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.599855 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.600245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.600440 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.600622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.600755 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704153 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807416 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912355 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015883 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015926 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119378 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222893 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222936 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222957 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326630 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326714 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326727 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.398173 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:22:08.390671936 +0000 UTC Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.418757 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:15 crc kubenswrapper[4705]: E0216 14:54:15.418901 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.418763 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:15 crc kubenswrapper[4705]: E0216 14:54:15.419079 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429357 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532278 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532326 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532353 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.635689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.636240 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.636443 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.636648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.636820 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.739851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.740496 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.740536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.740563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.740578 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844778 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844832 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948092 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948229 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051269 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051428 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051499 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051520 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153762 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153785 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153793 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258491 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361642 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361721 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.398320 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:40:46.316014484 +0000 UTC Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.418817 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.418921 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:16 crc kubenswrapper[4705]: E0216 14:54:16.419025 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:16 crc kubenswrapper[4705]: E0216 14:54:16.419169 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.442272 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464917 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464961 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464830 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.490698 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb
085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.515673 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.549886 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567850 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567904 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567946 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567965 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.575547 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.602835 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.618572 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.637120 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.649312 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.662876 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc 
kubenswrapper[4705]: I0216 14:54:16.669959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.670016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.670027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.670041 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.670052 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.671244 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.680559 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.683516 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.700767 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.724235 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.743532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.765882 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.773028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc 
kubenswrapper[4705]: I0216 14:54:16.773126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.773151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.773182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.773205 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.787440 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.805045 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb
3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.829969 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.848282 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.862035 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.875993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.876055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.876074 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.876102 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.876121 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.879087 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.894678 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.910620 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc 
kubenswrapper[4705]: I0216 14:54:16.931346 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.951468 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.969222 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044
b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979133 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979231 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979295 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.987884 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.009943 4705 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.033429 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.051054 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.076043 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.080984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.081024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.081033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.081048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.081057 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.093537 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.108430 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.129502 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183775 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183819 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183837 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183859 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183874 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286183 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286195 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286233 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.388899 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.388982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.389006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.389033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.389051 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.399415 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:27:56.501943159 +0000 UTC Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.418971 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.419026 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:17 crc kubenswrapper[4705]: E0216 14:54:17.419201 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:17 crc kubenswrapper[4705]: E0216 14:54:17.419366 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.420034 4705 scope.go:117] "RemoveContainer" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.482992 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:17 crc kubenswrapper[4705]: E0216 14:54:17.483198 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:17 crc kubenswrapper[4705]: E0216 14:54:17.483300 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs 
podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:33.48327381 +0000 UTC m=+67.668250916 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492420 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492472 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492502 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.595982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.596053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.596078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.596111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.596136 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698521 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.800873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.800948 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.800974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.801007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.801030 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.806697 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/1.log" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.811007 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.811700 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.829113 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.840687 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.850638 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.862842 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.876903 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.889775 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903753 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903833 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.914222 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.945589 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.964526 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044
b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.983167 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.999029 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006274 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006286 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006315 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.017117 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.031615 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.045463 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.068063 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.083970 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.098305 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109290 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109306 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.153982 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.192184 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.192328 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.192301826 +0000 UTC m=+84.377278902 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.212011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.212046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.212055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: 
I0216 14:54:18.212071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.212081 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.293905 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.293951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.293970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.293989 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294099 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294096 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294143 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294192 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.294174259 +0000 UTC m=+84.479151335 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294232 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.29421106 +0000 UTC m=+84.479188146 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294154 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294255 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294268 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294300 4705 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.294291252 +0000 UTC m=+84.479268328 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294113 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294325 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294348 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.294342394 +0000 UTC m=+84.479319460 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314203 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314216 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314236 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314249 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.399645 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:35:55.921308629 +0000 UTC Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416109 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416139 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.418579 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.418636 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.418693 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.418915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519310 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519349 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519393 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519406 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.621953 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.622001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.622013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.622032 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.622044 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726268 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.817834 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/2.log" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.819011 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/1.log" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.823880 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" exitCode=1 Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.823938 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.823999 4705 scope.go:117] "RemoveContainer" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.825570 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.825939 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829605 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829641 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829691 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829706 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.854190 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.875220 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.910680 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] 
Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/netwo
rks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.930485 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932788 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932850 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.965983 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.981060 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.001828 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.019923 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036171 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036275 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036954 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc 
kubenswrapper[4705]: I0216 14:54:19.056607 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.073761 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.089629 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.108263 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.126484 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.138577 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044
b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139039 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139128 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139166 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.150881 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.161693 4705 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.174415 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241687 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241713 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241723 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344122 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344147 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.400638 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:26:03.84875393 +0000 UTC Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.419316 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.419322 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:19 crc kubenswrapper[4705]: E0216 14:54:19.419505 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:19 crc kubenswrapper[4705]: E0216 14:54:19.419593 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446794 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446906 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549727 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549755 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652743 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652820 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755501 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755560 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.833897 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/2.log" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.837638 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:19 crc kubenswrapper[4705]: E0216 14:54:19.837778 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858250 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858418 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858447 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.859324 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h
9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-b
incopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.870584 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.883937 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.913111 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.932318 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.949472 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962082 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962091 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.971794 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.992247 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.014812 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.033747 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.055111 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065645 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.073906 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.089133 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.107263 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.126617 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.142580 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.155828 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167823 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167913 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167961 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.182759 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.270914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.270999 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.271024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.271059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.271105 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374308 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374441 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374637 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.401715 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:07:11.534529006 +0000 UTC Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.419552 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:20 crc kubenswrapper[4705]: E0216 14:54:20.419789 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.419823 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:20 crc kubenswrapper[4705]: E0216 14:54:20.420166 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477756 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580658 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580728 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683187 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.786885 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.786946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.786965 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.786992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.787009 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890212 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993498 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993521 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097525 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.200939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.201030 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.201085 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.201112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.201163 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305002 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305186 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.402836 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:33:47.034260063 +0000 UTC Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408679 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.419163 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.419224 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:21 crc kubenswrapper[4705]: E0216 14:54:21.419499 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:21 crc kubenswrapper[4705]: E0216 14:54:21.419709 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511736 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511763 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511782 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615739 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719186 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719213 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823500 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927588 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031194 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031255 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134738 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134860 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237185 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237195 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237218 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339788 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339801 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339818 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339829 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.403980 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:17:04.750099784 +0000 UTC Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.419463 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.419475 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:22 crc kubenswrapper[4705]: E0216 14:54:22.419603 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:22 crc kubenswrapper[4705]: E0216 14:54:22.419710 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442666 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546276 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649062 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649912 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753265 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753420 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753439 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855302 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855428 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957768 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957783 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060817 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060829 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163402 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266325 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266349 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.369886 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.369961 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.369980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.370007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.370028 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.404359 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:47:12.463928197 +0000 UTC Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.418757 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.418836 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.418911 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.419045 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473827 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473939 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577362 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673527 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673621 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.693698 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698816 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.718528 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722791 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722833 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.741569 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746257 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.759911 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764663 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764691 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.778826 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.779067 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781914 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885129 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885266 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988248 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988311 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091556 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091651 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194423 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194527 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194556 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194578 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297891 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297951 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400453 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400517 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.404825 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:20:19.6378647 +0000 UTC Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.419293 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.419454 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:24 crc kubenswrapper[4705]: E0216 14:54:24.419575 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:24 crc kubenswrapper[4705]: E0216 14:54:24.419699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504157 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607107 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607234 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710344 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813748 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813802 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813847 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813867 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917537 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917560 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917579 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020286 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020420 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020439 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.123871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.123934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.123954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.123982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.124000 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227334 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227478 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227496 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330216 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330263 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.405790 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:00:23.56200549 +0000 UTC Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.419221 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.419230 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:25 crc kubenswrapper[4705]: E0216 14:54:25.419475 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:25 crc kubenswrapper[4705]: E0216 14:54:25.419536 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.432979 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.433033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.433051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.433072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.433089 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536526 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536543 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536581 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639203 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639248 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639267 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742762 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742818 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846272 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846304 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949592 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949616 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949633 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052727 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052785 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052802 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052828 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052844 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.155964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.156029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.156047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.156071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.156088 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258856 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361817 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361910 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361951 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.406350 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:12:08.788241218 +0000 UTC Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.418876 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.418927 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:26 crc kubenswrapper[4705]: E0216 14:54:26.419127 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:26 crc kubenswrapper[4705]: E0216 14:54:26.419283 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.452395 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c
510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465083 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465134 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.475430 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.494157 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.535589 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.563099 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567523 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567616 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567635 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.580170 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.607925 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.624911 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.644910 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.655806 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.669941 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.669987 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.670005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.670026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.670049 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.677487 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.694597 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.712570 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc 
kubenswrapper[4705]: I0216 14:54:26.737777 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.763335 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773726 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773758 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773917 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.787733 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd
0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.810339 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.834694 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.876937 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc 
kubenswrapper[4705]: I0216 14:54:26.876985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.876998 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.877017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.877033 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980863 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980885 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980897 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084160 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187278 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290233 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393270 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.407754 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:38:44.353861583 +0000 UTC Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.419115 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.419152 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:27 crc kubenswrapper[4705]: E0216 14:54:27.419260 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:27 crc kubenswrapper[4705]: E0216 14:54:27.419460 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495598 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495645 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495675 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495687 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598995 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.701849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.701951 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.702009 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.702037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.702097 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805709 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805840 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805862 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909428 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012850 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012896 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012915 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117548 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117584 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.221928 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.221991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.222011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.222039 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.222059 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325357 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325438 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.408748 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:59:30.123914105 +0000 UTC Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.419153 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.419260 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:28 crc kubenswrapper[4705]: E0216 14:54:28.419450 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:28 crc kubenswrapper[4705]: E0216 14:54:28.419623 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428281 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428308 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428357 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.530959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.531019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.531038 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.531064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.531081 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634363 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634393 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737419 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737498 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737512 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840816 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840882 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840936 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.944954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.945070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.945097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.945127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.945149 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048087 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048155 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048219 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151299 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151407 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254296 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254318 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.356914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.356968 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.356979 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.356998 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.357012 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.409774 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:59:38.370027779 +0000 UTC Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.419319 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.419319 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:29 crc kubenswrapper[4705]: E0216 14:54:29.419522 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:29 crc kubenswrapper[4705]: E0216 14:54:29.419758 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460512 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460595 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563859 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563931 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563949 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666620 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666631 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770189 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872690 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872767 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872790 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872863 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872879 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976238 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976250 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078971 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183846 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183872 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286529 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286595 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286603 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389116 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.410941 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:42:54.197311221 +0000 UTC Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.419431 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.419431 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:30 crc kubenswrapper[4705]: E0216 14:54:30.419537 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:30 crc kubenswrapper[4705]: E0216 14:54:30.419597 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491082 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491113 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593485 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593522 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593531 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593546 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593555 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695294 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695402 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695429 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695449 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797414 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797446 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797474 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797484 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898612 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898621 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898643 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000522 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000602 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102835 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102848 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102857 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205751 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205812 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205842 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205852 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308361 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411170 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411083 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:46:09.034383644 +0000 UTC Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.418332 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.418407 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:31 crc kubenswrapper[4705]: E0216 14:54:31.418544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:31 crc kubenswrapper[4705]: E0216 14:54:31.418748 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514296 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514340 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514358 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618341 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618408 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618423 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618444 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618457 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721860 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721971 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721991 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825515 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825758 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825863 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932255 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932295 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035804 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138482 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138494 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138511 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138523 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240558 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240626 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240637 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240667 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.342829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.342914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.342940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.342976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.343001 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.411483 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:58:15.453429828 +0000 UTC Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.418824 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.418849 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:32 crc kubenswrapper[4705]: E0216 14:54:32.418993 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:32 crc kubenswrapper[4705]: E0216 14:54:32.419120 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.419823 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:32 crc kubenswrapper[4705]: E0216 14:54:32.419993 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445433 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445506 4705 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445576 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547265 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547313 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547341 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649703 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649773 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649800 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649810 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752191 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855012 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855116 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855134 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855147 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957695 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957840 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059735 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059785 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059815 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059828 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162695 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162705 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162737 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264824 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264891 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264906 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264999 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367236 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367248 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.411759 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:11:00.112468447 +0000 UTC Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.419066 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.419075 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.419217 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.419293 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.469910 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.469945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.469956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.470000 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.470014 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.559764 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.559981 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.560071 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:05.560045232 +0000 UTC m=+99.745022348 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.572870 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.572934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.572957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.572984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.573003 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675327 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778580 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778590 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778616 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881223 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881256 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891442 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891465 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.908437 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913263 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.928263 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932625 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932648 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.947110 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.952950 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.953013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.953031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.953057 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.953076 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.968990 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973335 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.989980 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.990220 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992283 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094589 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094675 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094732 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197242 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197298 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197342 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299278 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299312 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299345 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.401939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.401986 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.402001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.402019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.402034 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.412219 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:42:00.193167995 +0000 UTC Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.418590 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.418590 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:34 crc kubenswrapper[4705]: E0216 14:54:34.418782 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:34 crc kubenswrapper[4705]: E0216 14:54:34.418699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504688 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504756 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.607771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.608119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.608316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.608541 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.608708 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711486 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711493 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711507 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711529 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.813987 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.814033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.814042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.814058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.814069 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.889509 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/0.log" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.889562 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ec06562-0237-4709-9469-033783d7d545" containerID="341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f" exitCode=1 Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.889593 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerDied","Data":"341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.889949 4705 scope.go:117] "RemoveContainer" containerID="341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.902207 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917833 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917904 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.931673 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.949759 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.961921 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.982865 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.997201 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.009999 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020236 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.022601 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.037261 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.051095 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.061846 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc 
kubenswrapper[4705]: I0216 14:54:35.074849 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.090251 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.102044 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.113738 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122947 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122956 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.126348 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.141566 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.155181 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225362 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225401 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225415 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327875 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327910 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327945 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.412880 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:56:30.909250174 +0000 UTC Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.419301 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.419301 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:35 crc kubenswrapper[4705]: E0216 14:54:35.419532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:35 crc kubenswrapper[4705]: E0216 14:54:35.419642 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431284 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431341 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534032 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534107 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636876 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636899 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636925 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636943 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.739671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.739935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.740010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.740088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.740155 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.842856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.843084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.843192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.843293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.843387 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.893701 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/0.log" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.893954 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.918047 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.935532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945271 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.951564 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.973857 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.990894 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.005068 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.020245 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.039624 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047588 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047637 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.055452 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.065876 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.080277 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.093592 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.103523 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.114875 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.131120 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150386 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150417 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150431 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.152025 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.169977 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.182536 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252809 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252834 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252852 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355586 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355644 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355655 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.413480 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:54:59.285337888 +0000 UTC Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.418860 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.418944 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:36 crc kubenswrapper[4705]: E0216 14:54:36.419037 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:36 crc kubenswrapper[4705]: E0216 14:54:36.419219 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.433818 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.446752 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460308 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460489 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460532 4705 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460553 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460563 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.477775 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.493232 4705 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.508481 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.527571 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.553091 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563295 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563364 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563456 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.568949 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.583129 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.611193 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.626088 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb
3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.640532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.652562 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.661181 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665613 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.672496 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.684393 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.696236 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc 
kubenswrapper[4705]: I0216 14:54:36.768304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.768351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.768361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.768393 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.768405 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871407 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871466 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974699 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974756 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974783 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974794 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077185 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077223 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077257 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179407 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179440 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281486 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281502 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281523 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281536 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384014 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384063 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384073 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384115 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.413857 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:59:12.772549505 +0000 UTC Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.419103 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.419119 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:37 crc kubenswrapper[4705]: E0216 14:54:37.419202 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:37 crc kubenswrapper[4705]: E0216 14:54:37.419284 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485886 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485896 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485926 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.618994 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.619056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.619071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.619088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.619099 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721944 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824667 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926483 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028824 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028856 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131069 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131080 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131125 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.233945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.233993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.234003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.234021 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.234031 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336442 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.414497 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:24:30.356675338 +0000 UTC Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.418927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.419084 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:38 crc kubenswrapper[4705]: E0216 14:54:38.419242 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:38 crc kubenswrapper[4705]: E0216 14:54:38.419443 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438842 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438866 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438876 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540800 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540807 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540831 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642657 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642671 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.744996 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.745026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.745037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.745050 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.745061 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847559 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847620 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847671 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950562 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950585 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950598 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053537 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053592 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053605 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053639 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156349 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156713 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156733 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259478 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259492 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362387 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362396 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.415065 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:11:23.231542568 +0000 UTC Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.418416 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.418499 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:39 crc kubenswrapper[4705]: E0216 14:54:39.418545 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:39 crc kubenswrapper[4705]: E0216 14:54:39.418773 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465131 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465176 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567513 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567527 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567549 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567564 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670102 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670175 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774217 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774340 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774467 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877528 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877631 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.980959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.981005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.981017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.981034 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.981046 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084322 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084363 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.186992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.187036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.187045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.187062 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.187072 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288714 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288763 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391460 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391500 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391529 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.416084 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:38:59.694745724 +0000 UTC Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.418483 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.418592 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:40 crc kubenswrapper[4705]: E0216 14:54:40.418670 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:40 crc kubenswrapper[4705]: E0216 14:54:40.418719 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494242 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494253 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494282 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596138 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.698961 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.699029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.699047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.699074 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.699096 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802107 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802155 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904713 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007381 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007466 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109936 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211858 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211881 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211891 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315159 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315185 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.416230 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:35:31.243981217 +0000 UTC Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.417983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418030 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418071 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418337 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418423 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:41 crc kubenswrapper[4705]: E0216 14:54:41.418532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:41 crc kubenswrapper[4705]: E0216 14:54:41.418612 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520116 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520164 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520173 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623661 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623701 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726206 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726223 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829294 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829363 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829449 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931782 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931847 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033753 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033819 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033828 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137496 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242012 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242116 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242146 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242174 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345877 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345897 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.416968 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:25:27.783402991 +0000 UTC Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.419311 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.419421 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:42 crc kubenswrapper[4705]: E0216 14:54:42.419474 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:42 crc kubenswrapper[4705]: E0216 14:54:42.419571 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448441 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448485 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448500 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448532 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551640 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654125 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654221 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654240 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756515 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756590 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756633 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860549 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860593 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963664 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067150 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170487 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170514 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.274884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.275510 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.275603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.275636 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.275655 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378137 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.417909 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:13:35.651529731 +0000 UTC Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.419231 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.419356 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:43 crc kubenswrapper[4705]: E0216 14:54:43.419565 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:43 crc kubenswrapper[4705]: E0216 14:54:43.419898 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.420202 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481101 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481721 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585442 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585488 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585505 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688177 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688237 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688273 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791388 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791428 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791447 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893742 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.926092 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/2.log" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.928525 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.929007 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.943752 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 
14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.957155 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.969512 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.980245 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.991429 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.995839 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.996003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.996071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.996139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.996207 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.005350 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.017842 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.031648 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039494 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039505 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039530 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.044420 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.049501 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052433 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052507 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.058955 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.064839 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068841 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068919 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.075300 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.081076 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084543 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084624 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.089663 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.100263 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105383 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105587 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.107644 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.117584 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.118005 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.119702 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.119833 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.119922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.119993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.120060 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.124178 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.141553 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.154895 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.190884 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.208968 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222676 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222702 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222726 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222737 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324831 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.418435 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.418559 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:11:12.10023925 +0000 UTC Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.418656 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.418832 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.418995 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427712 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427741 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427785 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530481 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530526 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.632954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.633197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.633266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.633357 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.633495 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.735995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.736055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.736075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.736100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.736118 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839641 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839896 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.934507 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/3.log" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.935334 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/2.log" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.938445 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" exitCode=1 Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.938501 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.938551 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.939723 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.940046 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945273 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945464 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.960611 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c
7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.983251 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.999325 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.014664 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049147 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049264 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049339 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.057872 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\" handler 8\\\\nI0216 14:54:44.267218 6717 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:54:44.267219 6717 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:54:44.267222 6717 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:54:44.267228 6717 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:54:44.267241 6717 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:54:44.267252 6717 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0216 14:54:44.267265 6717 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:54:44.267273 6717 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:54:44.267272 6717 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267287 6717 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:54:44.267307 6717 factory.go:656] Stopping watch factory\\\\nI0216 14:54:44.267327 6717 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267347 6717 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 14:54:44.267383 6717 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:54:44.267401 6717 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 14:54:44.267461 6717 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni
-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.089156 4705 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.107563 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.118220 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.129406 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.141659 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151818 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151887 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151916 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.154906 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc 
kubenswrapper[4705]: I0216 14:54:45.170690 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.185860 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.204124 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.220018 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.235190 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254881 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254959 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.256498 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.269036 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358545 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.419074 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:28:55.092645435 +0000 UTC Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.419287 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.419321 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:45 crc kubenswrapper[4705]: E0216 14:54:45.419687 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:45 crc kubenswrapper[4705]: E0216 14:54:45.419541 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462050 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462129 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.564952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.565011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.565023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.565075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.565085 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668107 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668131 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668148 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770828 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770928 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873824 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873870 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873890 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.943101 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/3.log" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.946976 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 14:54:45 crc kubenswrapper[4705]: E0216 14:54:45.947147 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.967162 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.977061 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.977427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.977437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc 
kubenswrapper[4705]: I0216 14:54:45.977456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.977465 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.986362 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.002217 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.019361 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.036572 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.057093 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.074549 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.080551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.080584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 
14:54:46.080595 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.080611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.080623 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.105139 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.124270 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.145188 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.164363 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\" handler 8\\\\nI0216 14:54:44.267218 6717 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:54:44.267219 6717 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:54:44.267222 6717 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:54:44.267228 6717 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nI0216 14:54:44.267241 6717 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:54:44.267252 6717 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:54:44.267265 6717 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:54:44.267273 6717 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:54:44.267272 6717 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267287 6717 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:54:44.267307 6717 factory.go:656] Stopping watch factory\\\\nI0216 14:54:44.267327 6717 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267347 6717 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 14:54:44.267383 6717 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:54:44.267401 6717 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 14:54:44.267461 6717 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.179449 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb
3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183343 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183423 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183433 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.199837 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.222410 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.237486 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.252328 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.266847 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.277872 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285714 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285751 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285759 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285783 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387877 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387941 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387987 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.418744 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.418867 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:46 crc kubenswrapper[4705]: E0216 14:54:46.418958 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:46 crc kubenswrapper[4705]: E0216 14:54:46.419067 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.419252 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:23:40.150111069 +0000 UTC Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.440111 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.455255 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.469241 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.486361 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.491013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.491264 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.491526 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.491789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.492031 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.502424 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.515547 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.536393 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.563432 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.581068 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595581 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595662 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595687 4705 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595719 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595748 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.597593 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.615587 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.633245 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.653848 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.673760 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.698776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.699005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.699251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.699425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.699535 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.706719 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.725055 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.745960 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.777224 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\" handler 8\\\\nI0216 14:54:44.267218 6717 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:54:44.267219 6717 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:54:44.267222 6717 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:54:44.267228 6717 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nI0216 14:54:44.267241 6717 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:54:44.267252 6717 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:54:44.267265 6717 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:54:44.267273 6717 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:54:44.267272 6717 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267287 6717 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:54:44.267307 6717 factory.go:656] Stopping watch factory\\\\nI0216 14:54:44.267327 6717 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267347 6717 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 14:54:44.267383 6717 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:54:44.267401 6717 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 14:54:44.267461 6717 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802627 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906311 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009178 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112494 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.215977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.216056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.216076 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.216111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.216132 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319259 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.419277 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.419277 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:47 crc kubenswrapper[4705]: E0216 14:54:47.419594 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:47 crc kubenswrapper[4705]: E0216 14:54:47.419736 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.419716 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:20:11.01827973 +0000 UTC Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422129 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422248 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525637 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525706 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525774 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628510 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628597 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628648 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731307 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731344 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834194 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.936919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.936962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.936974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.936991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.937005 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039836 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039900 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039979 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142323 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142430 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142454 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142504 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245474 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245560 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.348922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.349001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.349018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.349044 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.349061 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.418833 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.418834 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:48 crc kubenswrapper[4705]: E0216 14:54:48.419020 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:48 crc kubenswrapper[4705]: E0216 14:54:48.419252 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.421072 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:15:45.803614996 +0000 UTC
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451814 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451879 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451896 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451940 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.554927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.554999 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.555017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.555046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.555064 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658500 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658585 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762065 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762107 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864460 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864489 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966477 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068553 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068588 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171273 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171284 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171316 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273743 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273761 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273775 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376345 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376360 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.418836 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:54:49 crc kubenswrapper[4705]: E0216 14:54:49.418979 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.418850 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:49 crc kubenswrapper[4705]: E0216 14:54:49.419125 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.422017 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:31:31.293416382 +0000 UTC
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.478902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.478973 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.478996 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.479021 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.479040 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581310 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581406 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.683942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.684010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.684029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.684053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.684071 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787540 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787587 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890279 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890401 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890417 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993554 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993586 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993610 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096510 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096531 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096560 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096583 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200167 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.265503 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.265710 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.265675221 +0000 UTC m=+148.450652327 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303267 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.367274 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.367322 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.367341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.367405 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367501 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367532 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367659 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367706 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367737 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367548 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.367534753 +0000 UTC m=+148.552511819 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367669 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367873 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.367854302 +0000 UTC m=+148.552831388 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367886 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367906 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367965 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.367944834 +0000 UTC m=+148.552921990 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.368009 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.368000186 +0000 UTC m=+148.552977372 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406895 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406925 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406937 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.418415 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.418430 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.418650 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.418862 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.422511 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:07:31.343761565 +0000 UTC
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509686 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509769 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509831 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.612967 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.613024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.613060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.613093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.613115 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717085 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717249 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820091 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820130 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820159 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820172 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924136 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924234 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924287 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027620 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132063 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132074 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132099 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235580 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235641 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235704 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.337909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.337990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.338018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.338048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.338073 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.419558 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.419630 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:51 crc kubenswrapper[4705]: E0216 14:54:51.419935 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:51 crc kubenswrapper[4705]: E0216 14:54:51.420107 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.423571 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:46:33.137542328 +0000 UTC Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440628 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440733 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440794 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544387 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544482 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648616 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648712 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648761 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752289 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752343 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752408 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752425 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.855972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.856041 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.856059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.856086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.856104 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959366 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959420 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.061944 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.061990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.062001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.062018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.062031 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164173 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266621 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266645 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266663 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369240 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.419191 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.419206 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:52 crc kubenswrapper[4705]: E0216 14:54:52.419442 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:52 crc kubenswrapper[4705]: E0216 14:54:52.419580 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.423723 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:38:35.268963503 +0000 UTC Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472459 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472477 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472522 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580205 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580299 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684703 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684779 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684833 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684887 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788455 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891494 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891513 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891525 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993076 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095150 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196820 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196933 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196974 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.298923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.298984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.299005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.299028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.299045 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401123 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401176 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.418496 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.418538 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:53 crc kubenswrapper[4705]: E0216 14:54:53.418609 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:53 crc kubenswrapper[4705]: E0216 14:54:53.418685 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.424706 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:00:19.884701724 +0000 UTC Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503994 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607148 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607174 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.709975 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.710035 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.710070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.710095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.710114 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812835 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812854 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915223 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.017938 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.018014 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.018037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.018067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.018090 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120326 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149342 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.166937 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.170985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.171016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.171028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.171042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.171052 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.187565 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.190965 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.191004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.191013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.191029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.191039 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.207118 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210385 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210394 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210408 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210419 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.223177 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226815 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226848 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.238706 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z"
Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.238841 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240082 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240116 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240129 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240139 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.342916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.342964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.342975 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.342995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.343009 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.418452 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.418462 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.418621 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.418837 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.425153 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:51:49.841610125 +0000 UTC
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445969 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445997 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549413 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651576 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651657 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754769 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754819 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754831 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754848 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754862 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857924 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961289 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063816 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063867 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063887 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063922 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166389 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166426 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269179 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269242 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372221 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372313 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.419272 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.419266 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:55 crc kubenswrapper[4705]: E0216 14:54:55.419611 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:55 crc kubenswrapper[4705]: E0216 14:54:55.419706 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.425934 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:51:27.415029713 +0000 UTC Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475788 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475801 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475816 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475827 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578700 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681812 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681823 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681839 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681849 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784054 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784101 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784156 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886993 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989801 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093214 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196258 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196310 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299700 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403559 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403643 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403717 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.419054 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:56 crc kubenswrapper[4705]: E0216 14:54:56.419234 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.419054 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:56 crc kubenswrapper[4705]: E0216 14:54:56.419666 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.426723 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:09:04.606227906 +0000 UTC Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.445220 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.466758 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.488038 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507743 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.517728 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\" handler 8\\\\nI0216 14:54:44.267218 6717 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:54:44.267219 6717 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:54:44.267222 6717 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:54:44.267228 6717 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nI0216 14:54:44.267241 6717 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:54:44.267252 6717 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:54:44.267265 6717 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:54:44.267273 6717 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:54:44.267272 6717 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267287 6717 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:54:44.267307 6717 factory.go:656] Stopping watch factory\\\\nI0216 14:54:44.267327 6717 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267347 6717 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 14:54:44.267383 6717 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:54:44.267401 6717 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 14:54:44.267461 6717 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.544315 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.565586 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.583003 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.596801 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610622 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.613074 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.629733 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.650137 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.670235 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.691328 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.710864 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715513 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715599 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.733572 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.763592 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c
510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.789038 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.802706 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818762 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922038 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922143 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.024853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.025293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.025497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.025701 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.025847 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.128772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.129094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.129115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.129139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.129157 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232157 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232255 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232271 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335882 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335895 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335917 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335930 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.419388 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.419411 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:57 crc kubenswrapper[4705]: E0216 14:54:57.419966 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:57 crc kubenswrapper[4705]: E0216 14:54:57.419831 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.428324 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:35:02.581104608 +0000 UTC Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438469 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438506 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438542 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540440 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540467 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540479 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.643277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.643717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.643922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.644137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.644291 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747762 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747863 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747915 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852612 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852692 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955552 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955630 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955674 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.058927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.059638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.059856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.060026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.060170 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164216 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164270 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266827 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266955 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266976 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370534 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370625 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370672 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.419024 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:58 crc kubenswrapper[4705]: E0216 14:54:58.419212 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.419431 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:58 crc kubenswrapper[4705]: E0216 14:54:58.419717 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.429575 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:42:52.831844725 +0000 UTC
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.437655 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.473942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.473989 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.474006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.474031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.474051 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577681 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577739 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577756 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577801 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.680425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.680723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.680813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.680926 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.681009 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784237 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784292 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.887653 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.888007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.888201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.888335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.888520 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.991982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.992052 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.992071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.992098 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.992115 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.094737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.094980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.095047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.095115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.095181 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198148 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198183 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.300964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.301013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.301027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.301047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.301060 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.403984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.404046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.404065 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.404090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.404108 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.419221 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:54:59 crc kubenswrapper[4705]: E0216 14:54:59.419365 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.419243 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:59 crc kubenswrapper[4705]: E0216 14:54:59.419808 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.430565 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:42:18.909706534 +0000 UTC
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506689 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.609926 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.609974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.609986 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.610005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.610018 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713855 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713872 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817594 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817640 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920782 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920824 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023091 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023123 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125777 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125796 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125807 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228503 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331310 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331399 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.418623 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.418640 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:00 crc kubenswrapper[4705]: E0216 14:55:00.418999 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:00 crc kubenswrapper[4705]: E0216 14:55:00.419698 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.419958 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262"
Feb 16 14:55:00 crc kubenswrapper[4705]: E0216 14:55:00.420200 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.431498 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:47:45.645766575 +0000 UTC
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433298 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433322 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433406 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536069 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536109 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536128 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639288 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639299 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.742939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.743055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.743078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.743108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.743160 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.846869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.846958 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.846983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.847018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.847045 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950109 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950169 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.052819 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.053252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.053443 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.053581 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.053706 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.156921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.156996 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.157017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.157049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.157074 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259947 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259963 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363864 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363874 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363895 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363908 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.418528 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.418547 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:01 crc kubenswrapper[4705]: E0216 14:55:01.419199 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:01 crc kubenswrapper[4705]: E0216 14:55:01.419533 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.431816 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:41:23.248398244 +0000 UTC Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467477 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467494 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.570942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.571008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.571027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.571055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.571074 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674677 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674705 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674723 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778493 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778509 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778551 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881534 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881644 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984676 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984739 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088594 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088670 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.192841 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.192946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.192964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.192993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.193011 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.296911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.296986 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.297005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.297034 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.297056 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400628 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400699 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400765 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.419302 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.419414 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:02 crc kubenswrapper[4705]: E0216 14:55:02.419611 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:02 crc kubenswrapper[4705]: E0216 14:55:02.419784 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.432919 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:10:00.230677008 +0000 UTC Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504362 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504442 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504461 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606875 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606906 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709044 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709138 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812549 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812616 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812663 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812682 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.916795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.916927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.916956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.916995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.917020 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020068 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020125 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020187 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124205 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124280 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227325 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227431 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227508 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330830 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330925 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330974 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.419048 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.419162 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:03 crc kubenswrapper[4705]: E0216 14:55:03.419350 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:03 crc kubenswrapper[4705]: E0216 14:55:03.419945 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.433084 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:00:08.450216115 +0000 UTC Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434511 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434584 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538459 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538469 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641488 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641505 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641527 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641546 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744430 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744493 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.847803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.847874 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.847893 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.847990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.848011 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951524 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951601 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054656 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054758 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054776 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159329 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159349 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262847 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262866 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262891 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262913 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366118 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366160 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.419021 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.419051 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:04 crc kubenswrapper[4705]: E0216 14:55:04.419163 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:04 crc kubenswrapper[4705]: E0216 14:55:04.419288 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.433598 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:04:48.615423322 +0000 UTC Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469157 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469168 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572184 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572248 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572260 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586340 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586411 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586433 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.655034 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9"] Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.655640 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.659946 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.660488 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.660712 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.661209 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.718161 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2ljf7" podStartSLOduration=77.718120725 podStartE2EDuration="1m17.718120725s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.71683325 +0000 UTC m=+98.901810346" watchObservedRunningTime="2026-02-16 14:55:04.718120725 +0000 UTC m=+98.903097831" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734137 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734207 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734248 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef894106-ff89-4de4-8647-9e48b9e5cc87-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef894106-ff89-4de4-8647-9e48b9e5cc87-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734301 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef894106-ff89-4de4-8647-9e48b9e5cc87-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.759609 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" podStartSLOduration=77.759590056 podStartE2EDuration="1m17.759590056s" 
podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.735408051 +0000 UTC m=+98.920385177" watchObservedRunningTime="2026-02-16 14:55:04.759590056 +0000 UTC m=+98.944567132" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.777853 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.777822858 podStartE2EDuration="1m11.777822858s" podCreationTimestamp="2026-02-16 14:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.75972956 +0000 UTC m=+98.944706656" watchObservedRunningTime="2026-02-16 14:55:04.777822858 +0000 UTC m=+98.962799934" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.799391 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" podStartSLOduration=77.79934011 podStartE2EDuration="1m17.79934011s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.799296479 +0000 UTC m=+98.984273575" watchObservedRunningTime="2026-02-16 14:55:04.79934011 +0000 UTC m=+98.984317186" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835087 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835171 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835217 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef894106-ff89-4de4-8647-9e48b9e5cc87-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835243 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef894106-ff89-4de4-8647-9e48b9e5cc87-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835286 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef894106-ff89-4de4-8647-9e48b9e5cc87-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835313 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: 
\"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.836705 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.837601 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef894106-ff89-4de4-8647-9e48b9e5cc87-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.844996 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef894106-ff89-4de4-8647-9e48b9e5cc87-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.859404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef894106-ff89-4de4-8647-9e48b9e5cc87-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.914478 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.914452657 podStartE2EDuration="48.914452657s" podCreationTimestamp="2026-02-16 14:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.914139528 +0000 UTC m=+99.099116614" watchObservedRunningTime="2026-02-16 14:55:04.914452657 +0000 UTC m=+99.099429733" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.914628 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.914622041 podStartE2EDuration="6.914622041s" podCreationTimestamp="2026-02-16 14:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.899709281 +0000 UTC m=+99.084686397" watchObservedRunningTime="2026-02-16 14:55:04.914622041 +0000 UTC m=+99.099599117" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.951710 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=74.951684961 podStartE2EDuration="1m14.951684961s" podCreationTimestamp="2026-02-16 14:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.949665275 +0000 UTC m=+99.134642351" watchObservedRunningTime="2026-02-16 14:55:04.951684961 +0000 UTC m=+99.136662047" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.968319 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bflhj" podStartSLOduration=78.968288998 podStartE2EDuration="1m18.968288998s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 14:55:04.967020263 +0000 UTC m=+99.151997349" watchObservedRunningTime="2026-02-16 14:55:04.968288998 +0000 UTC m=+99.153266074" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.978812 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.982911 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podStartSLOduration=77.98289354 podStartE2EDuration="1m17.98289354s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.982104608 +0000 UTC m=+99.167081684" watchObservedRunningTime="2026-02-16 14:55:04.98289354 +0000 UTC m=+99.167870616" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:04.999873 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-f7zct" podStartSLOduration=78.999850566 podStartE2EDuration="1m18.999850566s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.999607869 +0000 UTC m=+99.184584945" watchObservedRunningTime="2026-02-16 14:55:04.999850566 +0000 UTC m=+99.184827642" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.020450 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" event={"ID":"ef894106-ff89-4de4-8647-9e48b9e5cc87","Type":"ContainerStarted","Data":"d20a2193c566d721c691c1f410419cb5f015624e6ea03badf14643b0fac75d43"} Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.034868 4705 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.034831338 podStartE2EDuration="1m19.034831338s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:05.034322814 +0000 UTC m=+99.219299890" watchObservedRunningTime="2026-02-16 14:55:05.034831338 +0000 UTC m=+99.219808454" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.419153 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.419202 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:05 crc kubenswrapper[4705]: E0216 14:55:05.420256 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:05 crc kubenswrapper[4705]: E0216 14:55:05.420459 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.434803 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:57:50.143387239 +0000 UTC Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.434954 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.448740 4705 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.643266 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:05 crc kubenswrapper[4705]: E0216 14:55:05.643932 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:55:05 crc kubenswrapper[4705]: E0216 14:55:05.644281 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:56:09.644237083 +0000 UTC m=+163.829214209 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:55:06 crc kubenswrapper[4705]: I0216 14:55:06.026539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" event={"ID":"ef894106-ff89-4de4-8647-9e48b9e5cc87","Type":"ContainerStarted","Data":"27a666e462e08046e0e9af84e427b12984703efc253673d45376df706cdbf47b"} Feb 16 14:55:06 crc kubenswrapper[4705]: I0216 14:55:06.418673 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:06 crc kubenswrapper[4705]: I0216 14:55:06.418673 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:06 crc kubenswrapper[4705]: E0216 14:55:06.420942 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:06 crc kubenswrapper[4705]: E0216 14:55:06.421170 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:07 crc kubenswrapper[4705]: I0216 14:55:07.419571 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:07 crc kubenswrapper[4705]: I0216 14:55:07.419772 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:07 crc kubenswrapper[4705]: E0216 14:55:07.419955 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:07 crc kubenswrapper[4705]: E0216 14:55:07.420144 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:08 crc kubenswrapper[4705]: I0216 14:55:08.419365 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:08 crc kubenswrapper[4705]: I0216 14:55:08.419484 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:08 crc kubenswrapper[4705]: E0216 14:55:08.419668 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:08 crc kubenswrapper[4705]: E0216 14:55:08.419854 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:09 crc kubenswrapper[4705]: I0216 14:55:09.419430 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:09 crc kubenswrapper[4705]: I0216 14:55:09.419461 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:09 crc kubenswrapper[4705]: E0216 14:55:09.419635 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:09 crc kubenswrapper[4705]: E0216 14:55:09.419921 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:10 crc kubenswrapper[4705]: I0216 14:55:10.418322 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:10 crc kubenswrapper[4705]: E0216 14:55:10.418454 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:10 crc kubenswrapper[4705]: I0216 14:55:10.418588 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:10 crc kubenswrapper[4705]: E0216 14:55:10.418769 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:11 crc kubenswrapper[4705]: I0216 14:55:11.419109 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:11 crc kubenswrapper[4705]: I0216 14:55:11.419113 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:11 crc kubenswrapper[4705]: E0216 14:55:11.419734 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:11 crc kubenswrapper[4705]: E0216 14:55:11.419982 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:11 crc kubenswrapper[4705]: I0216 14:55:11.420250 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 14:55:11 crc kubenswrapper[4705]: E0216 14:55:11.420792 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:55:12 crc kubenswrapper[4705]: I0216 14:55:12.419245 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:12 crc kubenswrapper[4705]: I0216 14:55:12.419308 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:12 crc kubenswrapper[4705]: E0216 14:55:12.419542 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:12 crc kubenswrapper[4705]: E0216 14:55:12.419607 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:13 crc kubenswrapper[4705]: I0216 14:55:13.419051 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:13 crc kubenswrapper[4705]: I0216 14:55:13.419222 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:13 crc kubenswrapper[4705]: E0216 14:55:13.419601 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:13 crc kubenswrapper[4705]: E0216 14:55:13.419903 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:14 crc kubenswrapper[4705]: I0216 14:55:14.418847 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:14 crc kubenswrapper[4705]: I0216 14:55:14.418854 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:14 crc kubenswrapper[4705]: E0216 14:55:14.419056 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:14 crc kubenswrapper[4705]: E0216 14:55:14.419267 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:15 crc kubenswrapper[4705]: I0216 14:55:15.418662 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:15 crc kubenswrapper[4705]: E0216 14:55:15.418835 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:15 crc kubenswrapper[4705]: I0216 14:55:15.419487 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:15 crc kubenswrapper[4705]: E0216 14:55:15.419649 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:16 crc kubenswrapper[4705]: I0216 14:55:16.419105 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:16 crc kubenswrapper[4705]: I0216 14:55:16.419212 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:16 crc kubenswrapper[4705]: E0216 14:55:16.421398 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:16 crc kubenswrapper[4705]: E0216 14:55:16.421496 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:17 crc kubenswrapper[4705]: I0216 14:55:17.419138 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:17 crc kubenswrapper[4705]: I0216 14:55:17.419191 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:17 crc kubenswrapper[4705]: E0216 14:55:17.419437 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:17 crc kubenswrapper[4705]: E0216 14:55:17.419646 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:18 crc kubenswrapper[4705]: I0216 14:55:18.419359 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:18 crc kubenswrapper[4705]: E0216 14:55:18.419595 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:18 crc kubenswrapper[4705]: I0216 14:55:18.419921 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:18 crc kubenswrapper[4705]: E0216 14:55:18.420067 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:19 crc kubenswrapper[4705]: I0216 14:55:19.418619 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:19 crc kubenswrapper[4705]: I0216 14:55:19.418695 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:19 crc kubenswrapper[4705]: E0216 14:55:19.419242 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:19 crc kubenswrapper[4705]: E0216 14:55:19.419281 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:20 crc kubenswrapper[4705]: I0216 14:55:20.418596 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:20 crc kubenswrapper[4705]: E0216 14:55:20.418705 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:20 crc kubenswrapper[4705]: I0216 14:55:20.419086 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:20 crc kubenswrapper[4705]: E0216 14:55:20.419431 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.091186 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/1.log" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092087 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/0.log" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092211 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ec06562-0237-4709-9469-033783d7d545" containerID="797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105" exitCode=1 Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092256 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerDied","Data":"797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105"} Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092341 4705 scope.go:117] "RemoveContainer" containerID="341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092978 4705 scope.go:117] "RemoveContainer" containerID="797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105" Feb 16 14:55:21 crc kubenswrapper[4705]: E0216 14:55:21.093288 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2ljf7_openshift-multus(0ec06562-0237-4709-9469-033783d7d545)\"" pod="openshift-multus/multus-2ljf7" podUID="0ec06562-0237-4709-9469-033783d7d545" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.134519 4705 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" podStartSLOduration=94.134497055 podStartE2EDuration="1m34.134497055s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:06.051458486 +0000 UTC m=+100.236435562" watchObservedRunningTime="2026-02-16 14:55:21.134497055 +0000 UTC m=+115.319474171" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.418680 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:21 crc kubenswrapper[4705]: E0216 14:55:21.419108 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.418724 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:21 crc kubenswrapper[4705]: E0216 14:55:21.419446 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:22 crc kubenswrapper[4705]: I0216 14:55:22.103255 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/1.log" Feb 16 14:55:22 crc kubenswrapper[4705]: I0216 14:55:22.418566 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:22 crc kubenswrapper[4705]: E0216 14:55:22.418708 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:22 crc kubenswrapper[4705]: I0216 14:55:22.419781 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:22 crc kubenswrapper[4705]: E0216 14:55:22.420689 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:23 crc kubenswrapper[4705]: I0216 14:55:23.419173 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:55:23 crc kubenswrapper[4705]: I0216 14:55:23.419177 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:23 crc kubenswrapper[4705]: E0216 14:55:23.419950 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:55:23 crc kubenswrapper[4705]: E0216 14:55:23.420082 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:24 crc kubenswrapper[4705]: I0216 14:55:24.419442 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:24 crc kubenswrapper[4705]: E0216 14:55:24.419865 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:24 crc kubenswrapper[4705]: I0216 14:55:24.419981 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:24 crc kubenswrapper[4705]: E0216 14:55:24.420607 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:24 crc kubenswrapper[4705]: I0216 14:55:24.421255 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262"
Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.116711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/3.log"
Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.119419 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f"}
Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.120035 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr"
Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.418423 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.418551 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:25 crc kubenswrapper[4705]: E0216 14:55:25.418925 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:55:25 crc kubenswrapper[4705]: E0216 14:55:25.419220 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.476015 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podStartSLOduration=98.475974281 podStartE2EDuration="1m38.475974281s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:25.180937384 +0000 UTC m=+119.365914530" watchObservedRunningTime="2026-02-16 14:55:25.475974281 +0000 UTC m=+119.660951407"
Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.477941 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8m64f"]
Feb 16 14:55:26 crc kubenswrapper[4705]: I0216 14:55:26.122579 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.122906 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:55:26 crc kubenswrapper[4705]: I0216 14:55:26.418677 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:26 crc kubenswrapper[4705]: I0216 14:55:26.418703 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.419901 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.420019 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.447615 4705 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.545220 4705 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 14:55:27 crc kubenswrapper[4705]: I0216 14:55:27.419265 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:27 crc kubenswrapper[4705]: E0216 14:55:27.419477 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:28 crc kubenswrapper[4705]: I0216 14:55:28.418994 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:28 crc kubenswrapper[4705]: I0216 14:55:28.419060 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:28 crc kubenswrapper[4705]: I0216 14:55:28.418994 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:55:28 crc kubenswrapper[4705]: E0216 14:55:28.419218 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:28 crc kubenswrapper[4705]: E0216 14:55:28.419415 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:55:28 crc kubenswrapper[4705]: E0216 14:55:28.419582 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:29 crc kubenswrapper[4705]: I0216 14:55:29.418829 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:29 crc kubenswrapper[4705]: E0216 14:55:29.419254 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:30 crc kubenswrapper[4705]: I0216 14:55:30.419351 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:30 crc kubenswrapper[4705]: I0216 14:55:30.419452 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:55:30 crc kubenswrapper[4705]: E0216 14:55:30.419562 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:30 crc kubenswrapper[4705]: I0216 14:55:30.419611 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:30 crc kubenswrapper[4705]: E0216 14:55:30.419700 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:55:30 crc kubenswrapper[4705]: E0216 14:55:30.419802 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:31 crc kubenswrapper[4705]: I0216 14:55:31.418838 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:31 crc kubenswrapper[4705]: E0216 14:55:31.419023 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:31 crc kubenswrapper[4705]: E0216 14:55:31.546841 4705 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 14:55:32 crc kubenswrapper[4705]: I0216 14:55:32.419017 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:55:32 crc kubenswrapper[4705]: I0216 14:55:32.419128 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:32 crc kubenswrapper[4705]: E0216 14:55:32.419199 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:55:32 crc kubenswrapper[4705]: I0216 14:55:32.419221 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:32 crc kubenswrapper[4705]: E0216 14:55:32.419554 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:32 crc kubenswrapper[4705]: E0216 14:55:32.419610 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:33 crc kubenswrapper[4705]: I0216 14:55:33.419121 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:33 crc kubenswrapper[4705]: E0216 14:55:33.419715 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:34 crc kubenswrapper[4705]: I0216 14:55:34.418468 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:55:34 crc kubenswrapper[4705]: E0216 14:55:34.418701 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:55:34 crc kubenswrapper[4705]: I0216 14:55:34.418829 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:34 crc kubenswrapper[4705]: I0216 14:55:34.418851 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:34 crc kubenswrapper[4705]: E0216 14:55:34.419160 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:34 crc kubenswrapper[4705]: I0216 14:55:34.419401 4705 scope.go:117] "RemoveContainer" containerID="797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105"
Feb 16 14:55:34 crc kubenswrapper[4705]: E0216 14:55:34.419661 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:35 crc kubenswrapper[4705]: I0216 14:55:35.164766 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/1.log"
Feb 16 14:55:35 crc kubenswrapper[4705]: I0216 14:55:35.164825 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6"}
Feb 16 14:55:35 crc kubenswrapper[4705]: I0216 14:55:35.418474 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:35 crc kubenswrapper[4705]: E0216 14:55:35.418678 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:35 crc kubenswrapper[4705]: I0216 14:55:35.773321 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr"
Feb 16 14:55:36 crc kubenswrapper[4705]: I0216 14:55:36.418878 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:55:36 crc kubenswrapper[4705]: I0216 14:55:36.419017 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:36 crc kubenswrapper[4705]: E0216 14:55:36.419840 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b"
Feb 16 14:55:36 crc kubenswrapper[4705]: I0216 14:55:36.419848 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:36 crc kubenswrapper[4705]: E0216 14:55:36.420113 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:36 crc kubenswrapper[4705]: E0216 14:55:36.420398 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:37 crc kubenswrapper[4705]: I0216 14:55:37.418633 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:37 crc kubenswrapper[4705]: I0216 14:55:37.421777 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 14:55:37 crc kubenswrapper[4705]: I0216 14:55:37.423703 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.418570 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f"
Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.418645 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.418897 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.422212 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.422499 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.422635 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.422783 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.446902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.505928 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"]
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.506750 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"
Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515287 4705 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515354 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515482 4705 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515509 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515570 4705 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515592 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515645 4705 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515663 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515778 4705 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515805 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.518344 4705 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.518423 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.524151 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tzm67"]
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.524762 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"]
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.525169 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.525677 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.530078 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"]
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.530569 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.530946 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cm4bk"]
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.531789 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.532200 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj"]
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.532518 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.539897 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj"]
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540194 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-encryption-config\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540224 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npt62\" (UniqueName: \"kubernetes.io/projected/39fcf916-177a-4f6c-ab49-18f1595166de-kube-api-access-npt62\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540241 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-audit-dir\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540257 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-image-import-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540272 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-node-pullsecrets\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540298 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-auth-proxy-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540315 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540501 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgtvw\" (UniqueName: \"kubernetes.io/projected/2527e960-4f78-42fa-8204-72f3dcf0716d-kube-api-access-fgtvw\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjng\" (UniqueName: \"kubernetes.io/projected/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-kube-api-access-zgjng\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540604 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-audit-policies\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540657 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-audit\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540717 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39fcf916-177a-4f6c-ab49-18f1595166de-audit-dir\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540751 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj"
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540823 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-config\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540857 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-images\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540966 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541035 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-serving-cert\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541139 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541262 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541293 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541327 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541379 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-client\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541402 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-client\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541418 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-encryption-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541436 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csp4\" (UniqueName: \"kubernetes.io/projected/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-kube-api-access-6csp4\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541452 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-machine-approver-tls\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541468 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541484 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541561 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-serving-cert\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541606 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541706 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-serving-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541747 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.543888 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.544472 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.545196 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.550233 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7clmb"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.550518 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.551226 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.552467 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cdb8w"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.552597 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.553116 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.556140 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.556656 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.557056 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.557225 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.557470 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.557612 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.558248 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.559334 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.559603 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.559804 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.559948 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.560364 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.596522 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.597523 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.597702 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.597754 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.597906 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.597967 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.598963 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.600954 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.601192 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.604575 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.637169 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.637206 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.637674 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.637842 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.638081 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.638247 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 14:55:45 
crc kubenswrapper[4705]: I0216 14:55:45.638563 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.638714 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.638877 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639083 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639386 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639490 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639654 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639977 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640018 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640533 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640583 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640760 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640859 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640936 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641002 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641063 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641140 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641164 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641281 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 
14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641613 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641759 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641870 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642274 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642418 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642581 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642781 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642906 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643292 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643430 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643536 4705 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643643 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643986 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.644503 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.644613 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.644786 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.648206 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.648458 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642419 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649043 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649280 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vtlq5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 
14:55:45.649597 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649646 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649688 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjng\" (UniqueName: \"kubernetes.io/projected/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-kube-api-access-zgjng\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649714 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-audit-policies\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649761 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fgtvw\" (UniqueName: \"kubernetes.io/projected/2527e960-4f78-42fa-8204-72f3dcf0716d-kube-api-access-fgtvw\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649784 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl59k\" (UniqueName: \"kubernetes.io/projected/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-kube-api-access-dl59k\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649807 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-audit\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649829 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649849 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39fcf916-177a-4f6c-ab49-18f1595166de-audit-dir\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.649868 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649883 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649897 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-config\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649912 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-images\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649926 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649942 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649956 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650171 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650815 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s5jzr"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649958 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-serving-cert\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650901 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 
14:55:45.650919 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvnwc\" (UniqueName: \"kubernetes.io/projected/0f32e760-39ac-4077-9c39-10ac5d621b15-kube-api-access-tvnwc\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650939 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650971 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650988 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651004 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r9vcs\" 
(UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651022 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651055 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651061 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651073 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651094 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" 
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651135 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651162 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f32e760-39ac-4077-9c39-10ac5d621b15-serving-cert\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651185 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-client\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651202 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-client\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651224 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1ded37-3147-4b41-b460-63471eba80b3-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-encryption-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-service-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651289 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttrg\" (UniqueName: \"kubernetes.io/projected/6b1ded37-3147-4b41-b460-63471eba80b3-kube-api-access-4ttrg\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651327 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1ded37-3147-4b41-b460-63471eba80b3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.651351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651389 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-machine-approver-tls\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651406 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csp4\" (UniqueName: \"kubernetes.io/projected/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-kube-api-access-6csp4\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651422 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-serving-cert\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2k46\" (UniqueName: 
\"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651467 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651484 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651503 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-serving-cert\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651518 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: 
I0216 14:55:45.651544 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651559 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651591 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/96793fb5-3ab7-4ae4-af94-8f8d1064b036-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651614 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-serving-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651664 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651683 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651699 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd7zw\" (UniqueName: \"kubernetes.io/projected/96793fb5-3ab7-4ae4-af94-8f8d1064b036-kube-api-access-nd7zw\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651713 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651732 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-encryption-config\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651772 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npt62\" (UniqueName: \"kubernetes.io/projected/39fcf916-177a-4f6c-ab49-18f1595166de-kube-api-access-npt62\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651789 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-audit-dir\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651806 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-image-import-ca\") pod 
\"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651823 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651849 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-auth-proxy-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651884 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-node-pullsecrets\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651903 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-config\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zb2r\" (UniqueName: \"kubernetes.io/projected/29292cac-8f57-4f0b-aeb5-b4b7db9b3e45-kube-api-access-9zb2r\") pod \"downloads-7954f5f757-cdb8w\" (UID: \"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45\") " pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651983 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.652728 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.652952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.654238 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.655402 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.655541 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.656107 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.656567 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-serving-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.657673 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-audit\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.658151 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-node-pullsecrets\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.659109 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.659456 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-audit-dir\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.660291 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-config\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.660349 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39fcf916-177a-4f6c-ab49-18f1595166de-audit-dir\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.662554 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.662626 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.663003 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-audit-policies\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.663089 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-auth-proxy-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.663579 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.664051 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.664560 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-images\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.664788 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.665062 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-image-import-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.666830 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-encryption-config\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.667287 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.667631 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 14:55:45 crc kubenswrapper[4705]: 
I0216 14:55:45.668822 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-machine-approver-tls\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.669381 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-encryption-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670001 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670115 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670258 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670434 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670603 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670690 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670762 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670846 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670915 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670987 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671017 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671062 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671112 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671152 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671447 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673908 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-serving-cert\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673268 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673363 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673446 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673777 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673772 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673848 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.675671 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-client\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.678232 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.689827 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-serving-cert\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.700441 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.726812 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.726901 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.727086 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.727985 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.728160 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.728882 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.729203 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.729575 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.729890 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.730605 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-client\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.731122 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.732014 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.732072 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.734992 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.735949 4705 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.738815 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.739409 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.739833 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sngv5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.739995 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.740452 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.740610 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.740608 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.741485 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.741534 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjng\" (UniqueName: \"kubernetes.io/projected/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-kube-api-access-zgjng\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.741658 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.741781 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.742464 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.742610 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.746185 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mw9hv"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.746839 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.746869 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.747435 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.747497 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.747617 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.748242 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.748947 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.749345 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.749786 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.750249 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752736 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752768 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752791 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/933889bd-b762-4afc-9b6c-0088cc6107a5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752826 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvnwc\" (UniqueName: \"kubernetes.io/projected/0f32e760-39ac-4077-9c39-10ac5d621b15-kube-api-access-tvnwc\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752843 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: 
\"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752887 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12d26c94-56da-48ee-8001-e82b50099e6b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752903 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-trusted-ca\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752918 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvq6\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-kube-api-access-fzvq6\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752942 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933889bd-b762-4afc-9b6c-0088cc6107a5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752961 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752977 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f32e760-39ac-4077-9c39-10ac5d621b15-serving-cert\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752992 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1ded37-3147-4b41-b460-63471eba80b3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753009 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-service-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753026 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ttrg\" (UniqueName: \"kubernetes.io/projected/6b1ded37-3147-4b41-b460-63471eba80b3-kube-api-access-4ttrg\") pod 
\"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753042 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/12d26c94-56da-48ee-8001-e82b50099e6b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753060 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1ded37-3147-4b41-b460-63471eba80b3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-serving-cert\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753098 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.753125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753146 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753162 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/96793fb5-3ab7-4ae4-af94-8f8d1064b036-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933889bd-b762-4afc-9b6c-0088cc6107a5-config\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753216 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-config\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753250 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfmj\" (UniqueName: \"kubernetes.io/projected/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-kube-api-access-2sfmj\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-serving-cert\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753284 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753303 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753319 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753337 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753357 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd7zw\" (UniqueName: \"kubernetes.io/projected/96793fb5-3ab7-4ae4-af94-8f8d1064b036-kube-api-access-nd7zw\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753387 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753409 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-config\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zb2r\" (UniqueName: \"kubernetes.io/projected/29292cac-8f57-4f0b-aeb5-b4b7db9b3e45-kube-api-access-9zb2r\") pod \"downloads-7954f5f757-cdb8w\" (UID: \"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45\") " pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753471 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl59k\" (UniqueName: \"kubernetes.io/projected/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-kube-api-access-dl59k\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 
14:55:45.753485 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.756623 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.758545 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1ded37-3147-4b41-b460-63471eba80b3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.759202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.759621 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: 
\"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.763282 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.764208 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.764772 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.764986 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-config\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.765139 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.765938 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-service-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.765978 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.766710 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.766952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.767195 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/96793fb5-3ab7-4ae4-af94-8f8d1064b036-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.769011 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.769614 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.772543 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f32e760-39ac-4077-9c39-10ac5d621b15-serving-cert\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.772563 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-serving-cert\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 
14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.772546 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.772880 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.773333 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.774114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgtvw\" (UniqueName: \"kubernetes.io/projected/2527e960-4f78-42fa-8204-72f3dcf0716d-kube-api-access-fgtvw\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.774426 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.775177 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.774491 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1ded37-3147-4b41-b460-63471eba80b3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.776050 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.776146 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.776795 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.776988 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.777120 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cm4bk"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.777159 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.777358 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.777683 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.784999 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.788545 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h6x7d"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.788875 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.789016 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.789702 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.790261 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.792734 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.793446 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.793628 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tzm67"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.794650 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cdb8w"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.795885 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.797425 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.798337 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.800197 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pdvn5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.801350 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.802038 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hnkwm"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.803240 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7clmb"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.803305 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.805033 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vtlq5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.806885 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jtcsx"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.807320 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.808695 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s5jzr"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.809994 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.811883 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.813196 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.814963 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.816223 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sngv5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.817879 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.819334 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.821287 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.828162 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"] Feb 16 
14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.829772 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.831153 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.832508 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.832532 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.833723 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.835190 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h6x7d"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.837042 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jtcsx"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.838845 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hnkwm"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.841801 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.843665 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csp4\" (UniqueName: \"kubernetes.io/projected/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-kube-api-access-6csp4\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: 
\"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.844239 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.845888 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.849835 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.851356 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.852671 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.852860 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.853957 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.855595 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/933889bd-b762-4afc-9b6c-0088cc6107a5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.855731 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12d26c94-56da-48ee-8001-e82b50099e6b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.855867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-trusted-ca\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.855969 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzvq6\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-kube-api-access-fzvq6\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856071 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933889bd-b762-4afc-9b6c-0088cc6107a5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/12d26c94-56da-48ee-8001-e82b50099e6b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856354 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933889bd-b762-4afc-9b6c-0088cc6107a5-config\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856475 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-config\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856559 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-serving-cert\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856626 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sfmj\" (UniqueName: \"kubernetes.io/projected/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-kube-api-access-2sfmj\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856696 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856928 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12d26c94-56da-48ee-8001-e82b50099e6b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.857672 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.859159 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.859592 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/12d26c94-56da-48ee-8001-e82b50099e6b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.861118 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2j46p"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.862172 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.862435 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-z5fgm"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.863010 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.863068 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npt62\" (UniqueName: \"kubernetes.io/projected/39fcf916-177a-4f6c-ab49-18f1595166de-kube-api-access-npt62\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.864662 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.865821 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pdvn5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.867509 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2j46p"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.872120 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.888974 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.897782 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933889bd-b762-4afc-9b6c-0088cc6107a5-config\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.900148 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.909655 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.920187 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.926837 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.928816 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.940604 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933889bd-b762-4afc-9b6c-0088cc6107a5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.948555 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.984917 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.007203 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.008920 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.029139 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.049858 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.069275 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.090321 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.112738 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.129909 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.149652 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.170349 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.188679 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.207609 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.209080 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.213226 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" event={"ID":"2ee0fef7-2491-4b6c-9c2a-787efabdb7df","Type":"ContainerStarted","Data":"9caf92b69500d5cc6d3a32f4ddf3209698c5e4ee714ca8d888c90a4ff6454526"} Feb 16 14:55:46 crc kubenswrapper[4705]: W0216 14:55:46.218220 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51cb62a1_dd06_4f6b_aa37_c824973a7df0.slice/crio-68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a WatchSource:0}: Error finding container 68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a: Status 404 returned error can't find the container with id 68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.228984 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.241717 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-serving-cert\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.250560 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.269214 4705 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.277445 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-config\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.299054 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.308513 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-trusted-ca\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.310099 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.328458 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.349505 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.368841 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.390658 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.408923 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.428925 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.441686 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cm4bk"] Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.447060 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"] Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.449421 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 14:55:46 crc kubenswrapper[4705]: W0216 14:55:46.456493 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2527e960_4f78_42fa_8204_72f3dcf0716d.slice/crio-b61ea762b92b5055d078e15b5c56eb01075aba104823decbc384f8e6e2e68084 WatchSource:0}: Error finding container b61ea762b92b5055d078e15b5c56eb01075aba104823decbc384f8e6e2e68084: Status 404 returned error can't find the container with id b61ea762b92b5055d078e15b5c56eb01075aba104823decbc384f8e6e2e68084 Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.459134 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tzm67"] Feb 16 14:55:46 crc kubenswrapper[4705]: W0216 14:55:46.460155 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39fcf916_177a_4f6c_ab49_18f1595166de.slice/crio-f09cec64a6cc4b4cafa4a3632a95fe92b80cbf9a292d185caf71f810f3d4df78 WatchSource:0}: Error finding container f09cec64a6cc4b4cafa4a3632a95fe92b80cbf9a292d185caf71f810f3d4df78: Status 404 returned error can't find the container with id f09cec64a6cc4b4cafa4a3632a95fe92b80cbf9a292d185caf71f810f3d4df78 Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.490422 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.509025 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.530502 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.549564 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.572011 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.589165 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.609019 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.631591 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.650263 4705 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.654848 4705 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.654938 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config podName:a8302bc0-d3ed-4950-a728-5569d512a90c nodeName:}" failed. No retries permitted until 2026-02-16 14:55:47.154912539 +0000 UTC m=+141.339889655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config") pod "route-controller-manager-6576b87f9c-ksptd" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.657603 4705 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.657671 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca podName:a8302bc0-d3ed-4950-a728-5569d512a90c nodeName:}" failed. No retries permitted until 2026-02-16 14:55:47.157654064 +0000 UTC m=+141.342631170 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca") pod "route-controller-manager-6576b87f9c-ksptd" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.664601 4705 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.665485 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert podName:a8302bc0-d3ed-4950-a728-5569d512a90c nodeName:}" failed. No retries permitted until 2026-02-16 14:55:47.165435968 +0000 UTC m=+141.350413044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert") pod "route-controller-manager-6576b87f9c-ksptd" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c") : failed to sync secret cache: timed out waiting for the condition Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.675153 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.693141 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.710426 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.730301 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 
14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.748890 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.767630 4705 request.go:700] Waited for 1.017982825s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.769426 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.790316 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.812312 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.821749 4705 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.830284 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.849934 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.870182 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.890093 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.909578 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.929208 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.977079 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvnwc\" (UniqueName: \"kubernetes.io/projected/0f32e760-39ac-4077-9c39-10ac5d621b15-kube-api-access-tvnwc\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.977455 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.991245 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd7zw\" (UniqueName: \"kubernetes.io/projected/96793fb5-3ab7-4ae4-af94-8f8d1064b036-kube-api-access-nd7zw\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.992998 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.015226 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl59k\" (UniqueName: \"kubernetes.io/projected/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-kube-api-access-dl59k\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.036767 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.059392 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zb2r\" (UniqueName: \"kubernetes.io/projected/29292cac-8f57-4f0b-aeb5-b4b7db9b3e45-kube-api-access-9zb2r\") pod \"downloads-7954f5f757-cdb8w\" (UID: \"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45\") " pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.068487 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ttrg\" (UniqueName: \"kubernetes.io/projected/6b1ded37-3147-4b41-b460-63471eba80b3-kube-api-access-4ttrg\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.069243 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.094273 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.110996 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.130108 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.133328 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.140766 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.150920 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.169098 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.186113 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.186191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.186240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.194842 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 
14:55:47.205467 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.209098 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.220264 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" event={"ID":"51cb62a1-dd06-4f6b-aa37-c824973a7df0","Type":"ContainerStarted","Data":"579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.220336 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" event={"ID":"51cb62a1-dd06-4f6b-aa37-c824973a7df0","Type":"ContainerStarted","Data":"68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.221580 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.229785 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.233034 4705 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-s6knp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.233083 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" 
podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.242639 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2"] Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.251487 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.256057 4705 generic.go:334] "Generic (PLEG): container finished" podID="2527e960-4f78-42fa-8204-72f3dcf0716d" containerID="692d3707ea33fb649d005202d2ebd913e77097d96ec86cb5a63ce6196e5259d3" exitCode=0 Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.256524 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" event={"ID":"2527e960-4f78-42fa-8204-72f3dcf0716d","Type":"ContainerDied","Data":"692d3707ea33fb649d005202d2ebd913e77097d96ec86cb5a63ce6196e5259d3"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.256601 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" event={"ID":"2527e960-4f78-42fa-8204-72f3dcf0716d","Type":"ContainerStarted","Data":"b61ea762b92b5055d078e15b5c56eb01075aba104823decbc384f8e6e2e68084"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.256789 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.269087 4705 generic.go:334] "Generic (PLEG): container finished" podID="39fcf916-177a-4f6c-ab49-18f1595166de" containerID="e55d174441d6122d0d3a1e89d72e520f8e1080f22c9f2d5770f831356e50f7a0" exitCode=0 Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.269299 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" event={"ID":"39fcf916-177a-4f6c-ab49-18f1595166de","Type":"ContainerDied","Data":"e55d174441d6122d0d3a1e89d72e520f8e1080f22c9f2d5770f831356e50f7a0"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.269354 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" event={"ID":"39fcf916-177a-4f6c-ab49-18f1595166de","Type":"ContainerStarted","Data":"f09cec64a6cc4b4cafa4a3632a95fe92b80cbf9a292d185caf71f810f3d4df78"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.269548 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.270459 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7clmb"] Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.275490 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" event={"ID":"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea","Type":"ContainerStarted","Data":"c7137b75686886a2189707982495fb3bf51fcc38d424f5ef79b265dd9a39bd8e"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.275528 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" 
event={"ID":"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea","Type":"ContainerStarted","Data":"e756c4b724ddab8c019210bcae3933c09a2ea55aae60ab47239c4c0aea5f92f5"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.275539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" event={"ID":"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea","Type":"ContainerStarted","Data":"17f3484a544b9becf17e49536a1c748b3b712622769205cfcdf2009c42454cba"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.278333 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" event={"ID":"2ee0fef7-2491-4b6c-9c2a-787efabdb7df","Type":"ContainerStarted","Data":"76a9bdf94ba068a2e488c1ecf60677bbaec9cdce6910c3989ff1891c823c35d5"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.278380 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" event={"ID":"2ee0fef7-2491-4b6c-9c2a-787efabdb7df","Type":"ContainerStarted","Data":"acc214ce9ec02c559382ee8d9d0287780478587fbb57d31d1e449c09d665bec1"} Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.288932 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.310238 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.321475 4705 csr.go:261] certificate signing request csr-w9snf is approved, waiting to be issued Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.328906 4705 csr.go:257] certificate signing request csr-w9snf is issued Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.334773 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" 
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.351402 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.368610 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.384073 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj"] Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.392163 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.409991 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.428616 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.432975 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.452660 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 14:55:47 crc kubenswrapper[4705]: W0216 14:55:47.463634 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod100a207c_bfcf_42aa_8233_f760df5a3888.slice/crio-fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7 WatchSource:0}: Error finding container fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7: Status 404 returned error can't find the container with id fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7 Feb 16 
14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.469263 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.489312 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.511809 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.528951 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.551565 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.568940 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.577927 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cdb8w"] Feb 16 14:55:47 crc kubenswrapper[4705]: W0216 14:55:47.585076 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29292cac_8f57_4f0b_aeb5_b4b7db9b3e45.slice/crio-7c19c181f57a1945a23be6abf7420821c486e3c78cc6206bdfd23a35e729c628 WatchSource:0}: Error finding container 7c19c181f57a1945a23be6abf7420821c486e3c78cc6206bdfd23a35e729c628: Status 404 returned error can't find the container with id 7c19c181f57a1945a23be6abf7420821c486e3c78cc6206bdfd23a35e729c628 Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.589217 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" 
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.609791 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.630864 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.649613 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.672180 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.693860 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.731962 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp"]
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.736276 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/933889bd-b762-4afc-9b6c-0088cc6107a5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.753912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvq6\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-kube-api-access-fzvq6\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"
Feb 16 14:55:47 crc kubenswrapper[4705]: W0216 14:55:47.760618 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod606c1ccf_c94e_417d_852a_9cf7ed18c4f7.slice/crio-74b21c6f4db6bff94d8a95b797c1be74a68e1817163225c7f0b2cec9c4404196 WatchSource:0}: Error finding container 74b21c6f4db6bff94d8a95b797c1be74a68e1817163225c7f0b2cec9c4404196: Status 404 returned error can't find the container with id 74b21c6f4db6bff94d8a95b797c1be74a68e1817163225c7f0b2cec9c4404196
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.770614 4705 request.go:700] Waited for 1.913684677s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.787101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sfmj\" (UniqueName: \"kubernetes.io/projected/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-kube-api-access-2sfmj\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.790845 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.794782 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.819095 4705 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 16 14:55:47 crc kubenswrapper[4705]: E0216 14:55:47.823533 4705 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:47 crc kubenswrapper[4705]: E0216 14:55:47.823658 4705 projected.go:194] Error preparing data for projected volume kube-api-access-x2k46 for pod openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd: failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:47 crc kubenswrapper[4705]: E0216 14:55:47.823836 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46 podName:a8302bc0-d3ed-4950-a728-5569d512a90c nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.323791726 +0000 UTC m=+142.508768802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x2k46" (UniqueName: "kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46") pod "route-controller-manager-6576b87f9c-ksptd" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.830571 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.850732 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.880774 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.890467 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.928911 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.935909 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.940304 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.950358 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.961256 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.988839 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sngv5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.990303 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997408 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997457 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997487 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997511 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997532 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2sq\" (UniqueName: \"kubernetes.io/projected/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-kube-api-access-wd2sq\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997561 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrbsz\" (UniqueName: \"kubernetes.io/projected/4e908b56-64e1-410b-952c-a8d5c63242e8-kube-api-access-mrbsz\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997583 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzfg\" (UniqueName: \"kubernetes.io/projected/afea24b5-a4cc-48f0-869a-f45518e48dd1-kube-api-access-ttzfg\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-proxy-tls\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997660 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-service-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997716 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs7sx\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997739 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997760 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-config\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997784 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997835 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997873 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-images\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997898 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l724\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-kube-api-access-6l724\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997990 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-client\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998017 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998038 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998059 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998082 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998151 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998174 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e037a092-dcda-4227-9872-ea455a432ac6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998195 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998224 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e037a092-dcda-4227-9872-ea455a432ac6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998316 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9mv\" (UniqueName: \"kubernetes.io/projected/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-kube-api-access-bk9mv\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998360 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9c4n\" (UniqueName: \"kubernetes.io/projected/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-kube-api-access-k9c4n\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998409 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998433 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998471 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-serving-cert\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998529 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-metrics-tls\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4e908b56-64e1-410b-952c-a8d5c63242e8-proxy-tls\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998575 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e037a092-dcda-4227-9872-ea455a432ac6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"
Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.000029 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.500009303 +0000 UTC m=+142.684986479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.002803 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.011743 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.035770 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.041033 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.054670 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.100795 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.100970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.100993 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101015 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-registration-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101033 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-profile-collector-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101053 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd2sq\" (UniqueName: \"kubernetes.io/projected/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-kube-api-access-wd2sq\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101073 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrbsz\" (UniqueName: \"kubernetes.io/projected/4e908b56-64e1-410b-952c-a8d5c63242e8-kube-api-access-mrbsz\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101100 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzfg\" (UniqueName: \"kubernetes.io/projected/afea24b5-a4cc-48f0-869a-f45518e48dd1-kube-api-access-ttzfg\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101128 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs7sx\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101153 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101171 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101188 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhbch\" (UniqueName: \"kubernetes.io/projected/bd426fc6-0156-4802-b9ff-69cae6e061b6-kube-api-access-lhbch\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101231 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/611cca5d-97b7-4ca5-b011-5bbf06e79b58-tmpfs\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101248 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-images\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101272 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-client\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101296 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101311 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e989356-1c20-489c-84a5-6437a37ab683-cert\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101328 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-socket-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101345 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/226fa561-a051-4bf5-8d7b-b2d1e3871e81-serving-cert\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101387 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101403 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-key\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101427 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101460 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-certs\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101493 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e037a092-dcda-4227-9872-ea455a432ac6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-plugins-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101531 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101546 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qtmdz\" (UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101576 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-metrics-certs\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-srv-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101621 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqk5k\" (UniqueName: \"kubernetes.io/projected/1ac01610-0f79-4060-9820-5d2f6251a290-kube-api-access-nqk5k\") pod \"migrator-59844c95c7-xhcb8\" (UID: \"1ac01610-0f79-4060-9820-5d2f6251a290\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8"
Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-serving-cert\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101679 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4e908b56-64e1-410b-952c-a8d5c63242e8-proxy-tls\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101694 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e037a092-dcda-4227-9872-ea455a432ac6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101734 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101758 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd5hv\" (UniqueName: \"kubernetes.io/projected/cc99828c-51d1-42ae-a28b-b0fad667f0fa-kube-api-access-pd5hv\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 
14:55:48.101773 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c99403-3b09-4401-aa04-41a0ff730c68-service-ca-bundle\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101824 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-srv-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101839 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226fa561-a051-4bf5-8d7b-b2d1e3871e81-config\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101861 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqth\" (UniqueName: \"kubernetes.io/projected/cab18608-4788-45e5-a45a-d74482f31738-kube-api-access-5cqth\") pod \"csi-hostpathplugin-2j46p\" (UID: 
\"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101885 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqvk\" (UniqueName: \"kubernetes.io/projected/06c99403-3b09-4401-aa04-41a0ff730c68-kube-api-access-2zqvk\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101914 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc99828c-51d1-42ae-a28b-b0fad667f0fa-config-volume\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101930 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-stats-auth\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101954 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101970 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101993 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102010 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nxjb\" (UniqueName: \"kubernetes.io/projected/9e989356-1c20-489c-84a5-6437a37ab683-kube-api-access-6nxjb\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102027 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-proxy-tls\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc 
kubenswrapper[4705]: I0216 14:55:48.102042 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-service-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102066 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-config\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102084 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fee83f9-9187-4930-80d9-8337052eb6f7-config\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102101 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b436476-c64b-40ca-a644-1067ccefcecc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102120 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102137 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7c45\" (UniqueName: \"kubernetes.io/projected/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-kube-api-access-s7c45\") pod \"package-server-manager-789f6589d5-qtmdz\" (UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 
14:55:48.102224 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpfqn\" (UniqueName: \"kubernetes.io/projected/3bf0c710-9567-4ed7-8efb-a30798661adb-kube-api-access-zpfqn\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102258 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102273 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l724\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-kube-api-access-6l724\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-webhook-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102306 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc99828c-51d1-42ae-a28b-b0fad667f0fa-metrics-tls\") pod \"dns-default-hnkwm\" 
(UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102324 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-apiservice-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102353 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-cabundle\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102382 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dm6v\" (UniqueName: \"kubernetes.io/projected/226fa561-a051-4bf5-8d7b-b2d1e3871e81-kube-api-access-8dm6v\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102416 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102431 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102448 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-mountpoint-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102481 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fee83f9-9187-4930-80d9-8337052eb6f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102514 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fee83f9-9187-4930-80d9-8337052eb6f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102530 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3bf0c710-9567-4ed7-8efb-a30798661adb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-csi-data-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102575 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e037a092-dcda-4227-9872-ea455a432ac6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102591 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") pod \"console-f9d7485db-fnrqq\" (UID: 
\"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102606 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgh9s\" (UniqueName: \"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk9mv\" (UniqueName: \"kubernetes.io/projected/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-kube-api-access-bk9mv\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102649 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9c4n\" (UniqueName: \"kubernetes.io/projected/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-kube-api-access-k9c4n\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102665 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102679 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102697 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntkms\" (UniqueName: \"kubernetes.io/projected/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-kube-api-access-ntkms\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102712 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-node-bootstrap-token\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102728 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrh7\" (UniqueName: \"kubernetes.io/projected/0b436476-c64b-40ca-a644-1067ccefcecc-kube-api-access-mmrh7\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102744 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: 
\"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102760 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgttl\" (UniqueName: \"kubernetes.io/projected/611cca5d-97b7-4ca5-b011-5bbf06e79b58-kube-api-access-fgttl\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102776 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102803 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-metrics-tls\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm4gd\" (UniqueName: \"kubernetes.io/projected/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-kube-api-access-pm4gd\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102835 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxx2p\" 
(UniqueName: \"kubernetes.io/projected/f7690b59-a363-4f97-aa47-a6bb9fb41d20-kube-api-access-zxx2p\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102854 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-default-certificate\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102883 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxq24\" (UniqueName: \"kubernetes.io/projected/4689fb61-8aab-4ec2-b20b-5f4d8753758f-kube-api-access-gxq24\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.103004 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.602989987 +0000 UTC m=+142.787967063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.104648 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.105653 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-service-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.105925 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-config\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.107279 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.109131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-images\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.110069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e037a092-dcda-4227-9872-ea455a432ac6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.110211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.110520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.111189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.115437 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.116115 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.118677 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.119114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.126783 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.128685 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.131133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.132361 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.137677 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-metrics-tls\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.137998 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4e908b56-64e1-410b-952c-a8d5c63242e8-proxy-tls\") pod 
\"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.138269 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.138351 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.138676 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-serving-cert\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.138714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-client\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.139114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e037a092-dcda-4227-9872-ea455a432ac6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.139746 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.142135 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.156120 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-proxy-tls\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.163430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.173142 
4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.186079 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs7sx\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.189310 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd2sq\" (UniqueName: \"kubernetes.io/projected/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-kube-api-access-wd2sq\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.205919 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-webhook-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.205952 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc99828c-51d1-42ae-a28b-b0fad667f0fa-metrics-tls\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " 
pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.205971 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-apiservice-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.205993 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-cabundle\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206012 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dm6v\" (UniqueName: \"kubernetes.io/projected/226fa561-a051-4bf5-8d7b-b2d1e3871e81-kube-api-access-8dm6v\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206030 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-mountpoint-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206048 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fee83f9-9187-4930-80d9-8337052eb6f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: 
\"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206063 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fee83f9-9187-4930-80d9-8337052eb6f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3bf0c710-9567-4ed7-8efb-a30798661adb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206096 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-csi-data-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgh9s\" (UniqueName: \"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206152 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ntkms\" (UniqueName: \"kubernetes.io/projected/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-kube-api-access-ntkms\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-node-bootstrap-token\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206186 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmrh7\" (UniqueName: \"kubernetes.io/projected/0b436476-c64b-40ca-a644-1067ccefcecc-kube-api-access-mmrh7\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206202 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206219 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgttl\" (UniqueName: \"kubernetes.io/projected/611cca5d-97b7-4ca5-b011-5bbf06e79b58-kube-api-access-fgttl\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: 
\"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206235 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm4gd\" (UniqueName: \"kubernetes.io/projected/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-kube-api-access-pm4gd\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxx2p\" (UniqueName: \"kubernetes.io/projected/f7690b59-a363-4f97-aa47-a6bb9fb41d20-kube-api-access-zxx2p\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206268 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-default-certificate\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206289 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxq24\" (UniqueName: \"kubernetes.io/projected/4689fb61-8aab-4ec2-b20b-5f4d8753758f-kube-api-access-gxq24\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206312 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-registration-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206326 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-profile-collector-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206358 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206386 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206402 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhbch\" (UniqueName: \"kubernetes.io/projected/bd426fc6-0156-4802-b9ff-69cae6e061b6-kube-api-access-lhbch\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 
16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206419 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/611cca5d-97b7-4ca5-b011-5bbf06e79b58-tmpfs\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206449 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e989356-1c20-489c-84a5-6437a37ab683-cert\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-socket-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206480 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/226fa561-a051-4bf5-8d7b-b2d1e3871e81-serving-cert\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206495 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206508 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-key\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-certs\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206548 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-plugins-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206564 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qtmdz\" 
(UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206582 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-metrics-certs\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-srv-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206615 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqk5k\" (UniqueName: \"kubernetes.io/projected/1ac01610-0f79-4060-9820-5d2f6251a290-kube-api-access-nqk5k\") pod \"migrator-59844c95c7-xhcb8\" (UID: \"1ac01610-0f79-4060-9820-5d2f6251a290\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206647 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206663 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd5hv\" (UniqueName: 
\"kubernetes.io/projected/cc99828c-51d1-42ae-a28b-b0fad667f0fa-kube-api-access-pd5hv\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206693 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c99403-3b09-4401-aa04-41a0ff730c68-service-ca-bundle\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206711 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-srv-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206726 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226fa561-a051-4bf5-8d7b-b2d1e3871e81-config\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cqth\" (UniqueName: \"kubernetes.io/projected/cab18608-4788-45e5-a45a-d74482f31738-kube-api-access-5cqth\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206757 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2zqvk\" (UniqueName: \"kubernetes.io/projected/06c99403-3b09-4401-aa04-41a0ff730c68-kube-api-access-2zqvk\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206773 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc99828c-51d1-42ae-a28b-b0fad667f0fa-config-volume\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206788 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-stats-auth\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206806 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206823 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206837 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6nxjb\" (UniqueName: \"kubernetes.io/projected/9e989356-1c20-489c-84a5-6437a37ab683-kube-api-access-6nxjb\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206854 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fee83f9-9187-4930-80d9-8337052eb6f7-config\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206869 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b436476-c64b-40ca-a644-1067ccefcecc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206886 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206905 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7c45\" (UniqueName: \"kubernetes.io/projected/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-kube-api-access-s7c45\") pod \"package-server-manager-789f6589d5-qtmdz\" 
(UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpfqn\" (UniqueName: \"kubernetes.io/projected/3bf0c710-9567-4ed7-8efb-a30798661adb-kube-api-access-zpfqn\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.209137 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c99403-3b09-4401-aa04-41a0ff730c68-service-ca-bundle\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.209497 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-socket-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.210932 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-csi-data-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.211096 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226fa561-a051-4bf5-8d7b-b2d1e3871e81-config\") pod 
\"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.214141 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e989356-1c20-489c-84a5-6437a37ab683-cert\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.214734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.215295 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.215573 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc99828c-51d1-42ae-a28b-b0fad667f0fa-config-volume\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.215721 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrbsz\" (UniqueName: 
\"kubernetes.io/projected/4e908b56-64e1-410b-952c-a8d5c63242e8-kube-api-access-mrbsz\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.216320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fee83f9-9187-4930-80d9-8337052eb6f7-config\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.216507 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-cabundle\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.216570 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-registration-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.217551 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.218907 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.218935 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-srv-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.219018 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-plugins-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.219053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-mountpoint-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.219284 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.719268926 +0000 UTC m=+142.904246002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.219538 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.219905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/611cca5d-97b7-4ca5-b011-5bbf06e79b58-tmpfs\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.223980 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-webhook-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.225931 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-metrics-certs\") pod \"router-default-5444994796-mw9hv\" (UID: 
\"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.226037 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-default-certificate\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.226489 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-profile-collector-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.226977 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3bf0c710-9567-4ed7-8efb-a30798661adb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.227517 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-key\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.228735 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc99828c-51d1-42ae-a28b-b0fad667f0fa-metrics-tls\") pod \"dns-default-hnkwm\" (UID: 
\"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.229606 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-certs\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.231177 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-stats-auth\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.232628 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-srv-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.232993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-apiservice-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.235038 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b436476-c64b-40ca-a644-1067ccefcecc-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.241996 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzfg\" (UniqueName: \"kubernetes.io/projected/afea24b5-a4cc-48f0-869a-f45518e48dd1-kube-api-access-ttzfg\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.242993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fee83f9-9187-4930-80d9-8337052eb6f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.243616 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-node-bootstrap-token\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.243972 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.243982 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/226fa561-a051-4bf5-8d7b-b2d1e3871e81-serving-cert\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.244223 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qtmdz\" (UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.244397 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.249409 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.265078 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " 
pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.270301 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.281824 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.285124 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.296415 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk9mv\" (UniqueName: \"kubernetes.io/projected/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-kube-api-access-bk9mv\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.297064 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.310544 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.311049 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.81103368 +0000 UTC m=+142.996010756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.315567 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e037a092-dcda-4227-9872-ea455a432ac6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.317314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cdb8w" 
event={"ID":"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45","Type":"ContainerStarted","Data":"4e36b99e9e29733d3e20c6e7feda67be482fbf84a2e3657e13acc8a6ee781e4b"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.317352 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cdb8w" event={"ID":"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45","Type":"ContainerStarted","Data":"7c19c181f57a1945a23be6abf7420821c486e3c78cc6206bdfd23a35e729c628"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.318096 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.321799 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.324049 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9c4n\" (UniqueName: \"kubernetes.io/projected/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-kube-api-access-k9c4n\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.333237 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 14:50:47 +0000 UTC, rotation deadline is 2026-12-27 17:44:31.079497439 +0000 UTC Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.333263 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7538h48m42.746237058s for next certificate rotation Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.343609 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-cdb8w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: 
connection refused" start-of-body= Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.343681 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cdb8w" podUID="29292cac-8f57-4f0b-aeb5-b4b7db9b3e45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.345790 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l724\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-kube-api-access-6l724\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.347436 4705 generic.go:334] "Generic (PLEG): container finished" podID="606c1ccf-c94e-417d-852a-9cf7ed18c4f7" containerID="4ac96bd6c779cc04d96091f3a59fa8fd73597afa72e91d30522f991e49fbd79d" exitCode=0 Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.348195 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" event={"ID":"606c1ccf-c94e-417d-852a-9cf7ed18c4f7","Type":"ContainerDied","Data":"4ac96bd6c779cc04d96091f3a59fa8fd73597afa72e91d30522f991e49fbd79d"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.348230 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" event={"ID":"606c1ccf-c94e-417d-852a-9cf7ed18c4f7","Type":"ContainerStarted","Data":"74b21c6f4db6bff94d8a95b797c1be74a68e1817163225c7f0b2cec9c4404196"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.387339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" 
event={"ID":"100a207c-bfcf-42aa-8233-f760df5a3888","Type":"ContainerStarted","Data":"1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.387403 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" event={"ID":"100a207c-bfcf-42aa-8233-f760df5a3888","Type":"ContainerStarted","Data":"fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.388243 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.408005 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpfqn\" (UniqueName: \"kubernetes.io/projected/3bf0c710-9567-4ed7-8efb-a30798661adb-kube-api-access-zpfqn\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.424304 4705 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mqkpd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.424357 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.437072 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mmrh7\" (UniqueName: \"kubernetes.io/projected/0b436476-c64b-40ca-a644-1067ccefcecc-kube-api-access-mmrh7\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.437655 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.437999 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sngv5"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.441106 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.441296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.445088 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.945061367 +0000 UTC m=+143.130038443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.466305 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.483832 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cqth\" (UniqueName: \"kubernetes.io/projected/cab18608-4788-45e5-a45a-d74482f31738-kube-api-access-5cqth\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.498285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgh9s\" (UniqueName: \"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.504128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntkms\" (UniqueName: \"kubernetes.io/projected/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-kube-api-access-ntkms\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.507543 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" event={"ID":"6b1ded37-3147-4b41-b460-63471eba80b3","Type":"ContainerStarted","Data":"8fc5ccb65ec92b21a649cfd4501f7ab1801321c49246ae0429f210b4cffc5e9c"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.507721 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" event={"ID":"6b1ded37-3147-4b41-b460-63471eba80b3","Type":"ContainerStarted","Data":"d9a12cba1f126afe8f1c77a1c17b3dbbceaebd1ec9d1bff2c60a93bfe828a599"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.501138 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.510926 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.513611 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" event={"ID":"39fcf916-177a-4f6c-ab49-18f1595166de","Type":"ContainerStarted","Data":"3e55ff93237fb9ad1ed5d623118e2f22f1d1f290d65f79dd684335c8e696e49a"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.519761 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.522246 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7c45\" (UniqueName: \"kubernetes.io/projected/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-kube-api-access-s7c45\") pod \"package-server-manager-789f6589d5-qtmdz\" (UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.525299 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhbch\" (UniqueName: \"kubernetes.io/projected/bd426fc6-0156-4802-b9ff-69cae6e061b6-kube-api-access-lhbch\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.531296 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.535790 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" event={"ID":"96793fb5-3ab7-4ae4-af94-8f8d1064b036","Type":"ContainerStarted","Data":"2edbb4497336ca91e0d098963c0e23a4c0ec3034d27d21eba6686cf7087ab6cb"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.535834 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" event={"ID":"96793fb5-3ab7-4ae4-af94-8f8d1064b036","Type":"ContainerStarted","Data":"7477d8fd11607bee41cb06bc251148e912eee98d748970fc55066ec8a4d46692"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.535847 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" event={"ID":"96793fb5-3ab7-4ae4-af94-8f8d1064b036","Type":"ContainerStarted","Data":"857ce5e8efadf7ba4914f1404203b8fedd6e3a74b5067548ff5545886615abc5"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.538485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dm6v\" (UniqueName: \"kubernetes.io/projected/226fa561-a051-4bf5-8d7b-b2d1e3871e81-kube-api-access-8dm6v\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.546975 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 
16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.548217 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.048200375 +0000 UTC m=+143.233177451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.554467 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.557683 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxq24\" (UniqueName: \"kubernetes.io/projected/4689fb61-8aab-4ec2-b20b-5f4d8753758f-kube-api-access-gxq24\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.562413 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.563402 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.572243 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" event={"ID":"2527e960-4f78-42fa-8204-72f3dcf0716d","Type":"ContainerStarted","Data":"18938ddb45824b203f68a7a7473b0de5b16a114ce9b7b1135790f07bb00bd1f3"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.572302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" event={"ID":"2527e960-4f78-42fa-8204-72f3dcf0716d","Type":"ContainerStarted","Data":"b65302a380a18c6d41d67bd6d40e7cf924aef9d0b63ab5c6080db219a603a798"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.584514 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" event={"ID":"0f32e760-39ac-4077-9c39-10ac5d621b15","Type":"ContainerStarted","Data":"14b19c2a281ac5ed26da9857cfc65a9d252fa0d2901748b63be801b5d3edeaf0"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.584552 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" event={"ID":"0f32e760-39ac-4077-9c39-10ac5d621b15","Type":"ContainerStarted","Data":"c76d7c3e573a4c022aaa621beedea49f5fca0bfd079547c0ae36c77e4f820645"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.590756 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgttl\" (UniqueName: \"kubernetes.io/projected/611cca5d-97b7-4ca5-b011-5bbf06e79b58-kube-api-access-fgttl\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.591183 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.606094 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.610325 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.619804 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.625732 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.632214 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm4gd\" (UniqueName: \"kubernetes.io/projected/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-kube-api-access-pm4gd\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.646360 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nxjb\" (UniqueName: \"kubernetes.io/projected/9e989356-1c20-489c-84a5-6437a37ab683-kube-api-access-6nxjb\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " 
pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.649551 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.654749 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.154260502 +0000 UTC m=+143.339237578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.667724 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.680654 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fee83f9-9187-4930-80d9-8337052eb6f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.686027 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.694803 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxx2p\" (UniqueName: \"kubernetes.io/projected/f7690b59-a363-4f97-aa47-a6bb9fb41d20-kube-api-access-zxx2p\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.708407 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.709093 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.716601 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqvk\" (UniqueName: \"kubernetes.io/projected/06c99403-3b09-4401-aa04-41a0ff730c68-kube-api-access-2zqvk\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.716839 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.730647 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.742611 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd5hv\" (UniqueName: \"kubernetes.io/projected/cc99828c-51d1-42ae-a28b-b0fad667f0fa-kube-api-access-pd5hv\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.746411 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.757429 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.758414 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.258397037 +0000 UTC m=+143.443374113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.758513 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.766558 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqk5k\" (UniqueName: \"kubernetes.io/projected/1ac01610-0f79-4060-9820-5d2f6251a290-kube-api-access-nqk5k\") pod \"migrator-59844c95c7-xhcb8\" (UID: \"1ac01610-0f79-4060-9820-5d2f6251a290\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.773406 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.799360 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.860741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.861282 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.361268177 +0000 UTC m=+143.546245253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.882268 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.938212 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.947201 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.961653 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.968762 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.969131 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:49.469114514 +0000 UTC m=+143.654091590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.976510 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.069725 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.070022 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.57001091 +0000 UTC m=+143.754987986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.170920 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.171084 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.67105836 +0000 UTC m=+143.856035436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.172116 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.172454 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.672443238 +0000 UTC m=+143.857420314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.201844 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" podStartSLOduration=123.201813106 podStartE2EDuration="2m3.201813106s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.193712473 +0000 UTC m=+143.378689549" watchObservedRunningTime="2026-02-16 14:55:49.201813106 +0000 UTC m=+143.386790172" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.214193 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"] Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.244633 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" podStartSLOduration=123.244617903 podStartE2EDuration="2m3.244617903s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.243717898 +0000 UTC m=+143.428694974" watchObservedRunningTime="2026-02-16 14:55:49.244617903 +0000 UTC m=+143.429594979" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.273234 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.273744 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.773727344 +0000 UTC m=+143.958704420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.374672 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.380706 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.880688136 +0000 UTC m=+144.065665212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.478908 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" podStartSLOduration=123.478887288 podStartE2EDuration="2m3.478887288s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.421701414 +0000 UTC m=+143.606678500" watchObservedRunningTime="2026-02-16 14:55:49.478887288 +0000 UTC m=+143.663864364" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.479533 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.480008 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.979982448 +0000 UTC m=+144.164959524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.480748 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pdvn5"] Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.588709 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.589230 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.089212343 +0000 UTC m=+144.274189419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.652249 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" event={"ID":"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c","Type":"ContainerStarted","Data":"67e03a3d7063a32b2ea64590872588044a07583375801f7f696e755e10ce4153"}
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.669728 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"]
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.691617 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.691979 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.191961929 +0000 UTC m=+144.376939005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.693230 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"]
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.699160 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sngv5" event={"ID":"df2ed87f-5932-49d3-b0b0-a649c9fe7e75","Type":"ContainerStarted","Data":"522ebcb4a57dd4f489d85a2dc36dad0f463a6362a88da850664f8a0bd42e14e3"}
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.699206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sngv5" event={"ID":"df2ed87f-5932-49d3-b0b0-a649c9fe7e75","Type":"ContainerStarted","Data":"ecd76be3a98cfb3a9db239615ef1f4c79c3baafd6f9564eee32176529547b45d"}
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.699942 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sngv5"
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.719562 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cdb8w" podStartSLOduration=122.719546898 podStartE2EDuration="2m2.719546898s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.719153167 +0000 UTC m=+143.904130253" watchObservedRunningTime="2026-02-16 14:55:49.719546898 +0000 UTC m=+143.904523974"
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.730624 4705 patch_prober.go:28] interesting pod/console-operator-58897d9998-sngv5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.730673 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sngv5" podUID="df2ed87f-5932-49d3-b0b0-a649c9fe7e75" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.772666 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" event={"ID":"606c1ccf-c94e-417d-852a-9cf7ed18c4f7","Type":"ContainerStarted","Data":"d7100a71796228955a441849456e864611b865b9d54e7079be03574e7b402556"}
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.772720 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp"
Feb 16 14:55:49 crc kubenswrapper[4705]: W0216 14:55:49.784194 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc326d4_0e31_4506_a9fd_e8f7c19f1e8e.slice/crio-56d9a0c212518b18593f9fe3e1324ac7d60f53e3fdedb947e3f2d7524f1b2384 WatchSource:0}: Error finding container 56d9a0c212518b18593f9fe3e1324ac7d60f53e3fdedb947e3f2d7524f1b2384: Status 404 returned error can't find the container with id 56d9a0c212518b18593f9fe3e1324ac7d60f53e3fdedb947e3f2d7524f1b2384
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.796194 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.796633 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.296617699 +0000 UTC m=+144.481594775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.817038 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" event={"ID":"4e908b56-64e1-410b-952c-a8d5c63242e8","Type":"ContainerStarted","Data":"7a35592451f18dea3810831f57181f1e93e0845edf7b517bd33d807aab628aa1"}
Feb 16 14:55:49 crc kubenswrapper[4705]: W0216 14:55:49.818113 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc25ae00_316a_4dfb_8a83_72fe2318da5e.slice/crio-253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5 WatchSource:0}: Error finding container 253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5: Status 404 returned error can't find the container with id 253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.819206 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" podStartSLOduration=122.81919358 podStartE2EDuration="2m2.81919358s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.786753327 +0000 UTC m=+143.971730403" watchObservedRunningTime="2026-02-16 14:55:49.81919358 +0000 UTC m=+144.004170656"
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.834118 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" event={"ID":"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7","Type":"ContainerStarted","Data":"0c81ce5511daaf2f7b984dfefe315f6d23cb6598666007da5b4c9d6130593e3f"}
Feb 16 14:55:49 crc kubenswrapper[4705]: W0216 14:55:49.837324 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee710a8b_3390_4749_949f_e8efa983b1ae.slice/crio-7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2 WatchSource:0}: Error finding container 7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2: Status 404 returned error can't find the container with id 7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.846330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" event={"ID":"12d26c94-56da-48ee-8001-e82b50099e6b","Type":"ContainerStarted","Data":"f4d0393a3c846d28f6fd0853519acca5ded45b1e0fcdd2b99c7996680403812f"}
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.846384 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" event={"ID":"12d26c94-56da-48ee-8001-e82b50099e6b","Type":"ContainerStarted","Data":"39b390a0856fdfa35cd42c1f948ad40325dc2aaa31fcfb1aeec8cdbf1a1ed362"}
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.904954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.906465 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.4064498 +0000 UTC m=+144.591426876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.925564 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vtlq5"]
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.933533 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" event={"ID":"933889bd-b762-4afc-9b6c-0088cc6107a5","Type":"ContainerStarted","Data":"203e745e40643fa3477cbb0a1e0a6cbb60bd0bd73eb6703c51bb3c308455d4e5"}
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.933572 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" event={"ID":"933889bd-b762-4afc-9b6c-0088cc6107a5","Type":"ContainerStarted","Data":"6b5c8a342357bf8d1bb6e69a0ccd024b1e5f8ca04185bdca3ea4bf8525432de0"}
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.950895 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-cdb8w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.950946 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cdb8w" podUID="29292cac-8f57-4f0b-aeb5-b4b7db9b3e45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.977066 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"]
Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.984973 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.004170 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" podStartSLOduration=124.004155958 podStartE2EDuration="2m4.004155958s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.000728594 +0000 UTC m=+144.185705680" watchObservedRunningTime="2026-02-16 14:55:50.004155958 +0000 UTC m=+144.189133034"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.007114 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.011027 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.511012567 +0000 UTC m=+144.695989643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.122645 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" podStartSLOduration=124.122618887 podStartE2EDuration="2m4.122618887s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.029401943 +0000 UTC m=+144.214379019" watchObservedRunningTime="2026-02-16 14:55:50.122618887 +0000 UTC m=+144.307595963"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.126221 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.126535 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.626519374 +0000 UTC m=+144.811496450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.139708 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s5jzr"]
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.149442 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"]
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.227730 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.228652 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.728638444 +0000 UTC m=+144.913615520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.320628 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" podStartSLOduration=124.320597414 podStartE2EDuration="2m4.320597414s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.296790169 +0000 UTC m=+144.481767255" watchObservedRunningTime="2026-02-16 14:55:50.320597414 +0000 UTC m=+144.505574490"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.330105 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.330524 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.830506836 +0000 UTC m=+145.015483912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.339066 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" podStartSLOduration=123.339046201 podStartE2EDuration="2m3.339046201s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.337019265 +0000 UTC m=+144.521996331" watchObservedRunningTime="2026-02-16 14:55:50.339046201 +0000 UTC m=+144.524023277"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.402097 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" podStartSLOduration=123.402069775 podStartE2EDuration="2m3.402069775s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.401596912 +0000 UTC m=+144.586573988" watchObservedRunningTime="2026-02-16 14:55:50.402069775 +0000 UTC m=+144.587046851"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.431456 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.431845 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.931829634 +0000 UTC m=+145.116806710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.535898 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.536452 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.036435142 +0000 UTC m=+145.221412218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.536670 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.537122 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.03711458 +0000 UTC m=+145.222091656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.638876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.639804 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.139788555 +0000 UTC m=+145.324765631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.694074 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" podStartSLOduration=123.694048048 podStartE2EDuration="2m3.694048048s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.690636474 +0000 UTC m=+144.875613550" watchObservedRunningTime="2026-02-16 14:55:50.694048048 +0000 UTC m=+144.879025114"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.716483 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz"]
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.742242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.742592 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.242578543 +0000 UTC m=+145.427555619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.747070 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2j46p"]
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.809474 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sngv5" podStartSLOduration=123.809448482 podStartE2EDuration="2m3.809448482s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.747417446 +0000 UTC m=+144.932394532" watchObservedRunningTime="2026-02-16 14:55:50.809448482 +0000 UTC m=+144.994425558"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.838762 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" podStartSLOduration=123.838735238 podStartE2EDuration="2m3.838735238s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.807687384 +0000 UTC m=+144.992664460" watchObservedRunningTime="2026-02-16 14:55:50.838735238 +0000 UTC m=+145.023712304"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.847872 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.848318 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.34827783 +0000 UTC m=+145.533254906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.848741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.849243 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.349230017 +0000 UTC m=+145.534207093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.902901 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.903940 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.912355 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" podStartSLOduration=123.912330013 podStartE2EDuration="2m3.912330013s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.898207844 +0000 UTC m=+145.083184920" watchObservedRunningTime="2026-02-16 14:55:50.912330013 +0000 UTC m=+145.097307089"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.921490 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.921821 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.935986 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.949981 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.950476 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.450459802 +0000 UTC m=+145.635436878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.954263 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs"]
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.989438 4705 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cm4bk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]log ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]etcd ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/max-in-flight-filter ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 16 14:55:50 crc kubenswrapper[4705]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-startinformers ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 16 14:55:50 crc kubenswrapper[4705]: livez check failed
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.989495 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" podUID="2527e960-4f78-42fa-8204-72f3dcf0716d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.996030 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" event={"ID":"3bf0c710-9567-4ed7-8efb-a30798661adb","Type":"ContainerStarted","Data":"ae4b6a1b321339206664619a99696dfddec250fbc2f5ecdec70184a6653461c7"}
Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.997454 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" event={"ID":"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c","Type":"ContainerStarted","Data":"4a68694d9205bdfb87def20290951dac09f439c75f7a789b6d5f85b4fc1f55b1"}
Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.034119 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jtcsx"]
Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.038382 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" event={"ID":"4e908b56-64e1-410b-952c-a8d5c63242e8","Type":"ContainerStarted","Data":"1f21d9effdf9705ea08bc41127e5cc733c4e91ff79d4185095b243078fd2de65"}
Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.051148 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.053027 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.553002223 +0000 UTC m=+145.737979289 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.058440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mw9hv" event={"ID":"06c99403-3b09-4401-aa04-41a0ff730c68","Type":"ContainerStarted","Data":"a402c8597dbe05a1e88b62719874fc53d124ee20e23ff3bf26e132efc606488f"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.070301 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-z5fgm" event={"ID":"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e","Type":"ContainerStarted","Data":"94d504236ea23083b3f8b6e4e3a7463619ca7b3d1b1cda1464504b78551c2536"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.070355 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-z5fgm" event={"ID":"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e","Type":"ContainerStarted","Data":"56d9a0c212518b18593f9fe3e1324ac7d60f53e3fdedb947e3f2d7524f1b2384"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.082446 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"9bfd7aa05de5195b870054a9be0207c2efeaea82c9337684f6869c68482e5883"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.084412 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" podStartSLOduration=124.084397996 podStartE2EDuration="2m4.084397996s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:51.049472266 +0000 UTC m=+145.234449352" watchObservedRunningTime="2026-02-16 14:55:51.084397996 +0000 UTC m=+145.269375072" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.095157 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h6x7d"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.097456 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-z5fgm" podStartSLOduration=6.097439965 podStartE2EDuration="6.097439965s" podCreationTimestamp="2026-02-16 14:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:51.09288262 +0000 UTC m=+145.277859716" watchObservedRunningTime="2026-02-16 14:55:51.097439965 +0000 UTC m=+145.282417041" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.099219 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.112758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" event={"ID":"fc25ae00-316a-4dfb-8a83-72fe2318da5e","Type":"ContainerStarted","Data":"253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.149880 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" 
event={"ID":"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7","Type":"ContainerStarted","Data":"c8214de32fdc1fc886409072431943019d130477219756c277a160eecedfb7f4"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.151826 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.153752 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.653733214 +0000 UTC m=+145.838710290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.155071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" event={"ID":"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b","Type":"ContainerStarted","Data":"3c4849cb214c7aa28bc73c13495530f85b602e022011781000b33ac7d07225ba"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.171600 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" 
event={"ID":"afea24b5-a4cc-48f0-869a-f45518e48dd1","Type":"ContainerStarted","Data":"8d46c962c1d4c6c0e783070ab8b6586f1a2d8ec5957bf6fd2fe4928fa619c32f"} Feb 16 14:55:51 crc kubenswrapper[4705]: W0216 14:55:51.194783 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4689fb61_8aab_4ec2_b20b_5f4d8753758f.slice/crio-8bf57da0b9dcf848a1f6ebf5cc32bf8dda725d631f7f37df85d1fd3c23a82bc5 WatchSource:0}: Error finding container 8bf57da0b9dcf848a1f6ebf5cc32bf8dda725d631f7f37df85d1fd3c23a82bc5: Status 404 returned error can't find the container with id 8bf57da0b9dcf848a1f6ebf5cc32bf8dda725d631f7f37df85d1fd3c23a82bc5 Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.197210 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fnrqq" event={"ID":"ee710a8b-3390-4749-949f-e8efa983b1ae","Type":"ContainerStarted","Data":"7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.200974 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" event={"ID":"f74ef58c-d59c-43a0-8c8d-b6830dfd5120","Type":"ContainerStarted","Data":"aeded7686c16a893794a53b0a863cccf844479315d2809b50870eb0997572f6d"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.214524 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.222462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" event={"ID":"1b830d25-6407-4aa5-bb8a-4f1789e62fe9","Type":"ContainerStarted","Data":"5274a293877d22f88a5c94d288c9fa460fc4bd8cf8f1896c3b6c419eafa2460b"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.229133 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.240353 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" event={"ID":"a8302bc0-d3ed-4950-a728-5569d512a90c","Type":"ContainerStarted","Data":"a885e38805c34d5c1e7c89b9f1f29de1c4b5e2713a9a9b37541794c592748f30"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.241348 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.254242 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.256086 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.260092 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.760079949 +0000 UTC m=+145.945057025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.266054 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podStartSLOduration=124.266039973 podStartE2EDuration="2m4.266039973s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:51.259588616 +0000 UTC m=+145.444565682" watchObservedRunningTime="2026-02-16 14:55:51.266039973 +0000 UTC m=+145.451017049" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.266525 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.266740 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ksptd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.266861 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.277984 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.344668 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hnkwm"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.357631 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.357973 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.857950502 +0000 UTC m=+146.042927578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.358868 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.360209 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:55:51 crc kubenswrapper[4705]: W0216 14:55:51.379904 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b436476_c64b_40ca_a644_1067ccefcecc.slice/crio-b16197b197c82ee22391a0cb4178278687c0d7cef983e402abef0d0917a0a204 WatchSource:0}: Error finding container b16197b197c82ee22391a0cb4178278687c0d7cef983e402abef0d0917a0a204: Status 404 returned error can't find the container with id b16197b197c82ee22391a0cb4178278687c0d7cef983e402abef0d0917a0a204 Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.437678 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.464112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.464744 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.96472922 +0000 UTC m=+146.149706296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.491339 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.550741 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.564924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.565303 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:52.065285436 +0000 UTC m=+146.250262512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.670085 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.670487 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.17047405 +0000 UTC m=+146.355451126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.771001 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.773849 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.273804282 +0000 UTC m=+146.458781358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.873135 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.873524 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.373512226 +0000 UTC m=+146.558489302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.974070 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.974223 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.474196485 +0000 UTC m=+146.659173561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.974734 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.975055 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.475042149 +0000 UTC m=+146.660019225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.078102 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.079930 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.579892343 +0000 UTC m=+146.764869429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.185451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.186393 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.686358532 +0000 UTC m=+146.871335608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.243750 4705 patch_prober.go:28] interesting pod/console-operator-58897d9998-sngv5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.243807 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sngv5" podUID="df2ed87f-5932-49d3-b0b0-a649c9fe7e75" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.254456 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" event={"ID":"bd426fc6-0156-4802-b9ff-69cae6e061b6","Type":"ContainerStarted","Data":"68afa5d37dcb0bd560ce9e68615d6d01e3af5eb2e1b9934ea3eee2ad11045301"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.254515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" event={"ID":"bd426fc6-0156-4802-b9ff-69cae6e061b6","Type":"ContainerStarted","Data":"f5e1241d8cceceaa4cf7955c96694655910c3aac804d9458d7eed5e2d8f7c7a9"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.258982 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" event={"ID":"f7690b59-a363-4f97-aa47-a6bb9fb41d20","Type":"ContainerStarted","Data":"91ddc1445c6f2b7d384e8ff92f62eb2c2288e3b57d3b21afb21f402fdbc7991a"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.273183 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" event={"ID":"4689fb61-8aab-4ec2-b20b-5f4d8753758f","Type":"ContainerStarted","Data":"bc577dca0a9f4a27bee132034cc2355f85c9aeb8ba0246369a6b355614b69e1b"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.273225 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" event={"ID":"4689fb61-8aab-4ec2-b20b-5f4d8753758f","Type":"ContainerStarted","Data":"8bf57da0b9dcf848a1f6ebf5cc32bf8dda725d631f7f37df85d1fd3c23a82bc5"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.274189 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.289115 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.290537 4705 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9bb6j container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.290605 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" podUID="4689fb61-8aab-4ec2-b20b-5f4d8753758f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.291511 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.791462154 +0000 UTC m=+146.976439230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.292146 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" podStartSLOduration=125.292133482 podStartE2EDuration="2m5.292133482s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.291622148 +0000 UTC m=+146.476599224" watchObservedRunningTime="2026-02-16 14:55:52.292133482 +0000 UTC m=+146.477110558"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.304898 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" event={"ID":"1b830d25-6407-4aa5-bb8a-4f1789e62fe9","Type":"ContainerStarted","Data":"a5b890aba6e5606e2ebb8bcd914ccb26f505b11671e29866da7c47d2811f1b6c"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.304941 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" event={"ID":"1b830d25-6407-4aa5-bb8a-4f1789e62fe9","Type":"ContainerStarted","Data":"8421e39a1706977d4233d2a28550032e48de135d17fe66d0a57c022891f85f71"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.329926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" event={"ID":"fc25ae00-316a-4dfb-8a83-72fe2318da5e","Type":"ContainerStarted","Data":"5fa9675e76e9d05c53516ed8415decce4c44f3785514ae5a86a5062278da9f97"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.340083 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" podStartSLOduration=125.340060141 podStartE2EDuration="2m5.340060141s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.311809703 +0000 UTC m=+146.496786789" watchObservedRunningTime="2026-02-16 14:55:52.340060141 +0000 UTC m=+146.525037207"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.347812 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" event={"ID":"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b","Type":"ContainerStarted","Data":"60dc9e6d2b0dd282a9c70edcd58ad154c93253f67a5badffeab59c9efb60ba32"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.347864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" event={"ID":"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b","Type":"ContainerStarted","Data":"a790d572800fa52770fbc18aa7470a67020926ea0ca283dfe170ee99162ad461"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.348447 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.356134 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" event={"ID":"6fee83f9-9187-4930-80d9-8337052eb6f7","Type":"ContainerStarted","Data":"7109ec3cdda2df4766176fee66745b5558e04a716efaf4c8fd9fbea6d72add9b"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.367346 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" event={"ID":"f74ef58c-d59c-43a0-8c8d-b6830dfd5120","Type":"ContainerStarted","Data":"dfd833896e0c65af112ccb79ef8a2148496798f5064351c4c9a8d3381b88f470"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.372693 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" event={"ID":"1ac01610-0f79-4060-9820-5d2f6251a290","Type":"ContainerStarted","Data":"953a735848373500e54d58776d2aab9c02101767e845e22ab939680eb1206ed7"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.372755 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" event={"ID":"1ac01610-0f79-4060-9820-5d2f6251a290","Type":"ContainerStarted","Data":"8caa093e711405ff1e9e52c68f7fcb4e7f9b360b2ca26865fd633cebd5c52ebd"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.385986 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" event={"ID":"611cca5d-97b7-4ca5-b011-5bbf06e79b58","Type":"ContainerStarted","Data":"2a59e0a73356d875731e1cb70771b07653505ff59d10ba796560f24f5cf8e232"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.386022 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" event={"ID":"611cca5d-97b7-4ca5-b011-5bbf06e79b58","Type":"ContainerStarted","Data":"3a0b3f9c6befc99daa9a4323fb104c136719bd382793f83c5f7e9826159c1080"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.387724 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.390419 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.392269 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.892249386 +0000 UTC m=+147.077226462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.393262 4705 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gbsfs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.393341 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" podUID="611cca5d-97b7-4ca5-b011-5bbf06e79b58" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.401328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mw9hv" event={"ID":"06c99403-3b09-4401-aa04-41a0ff730c68","Type":"ContainerStarted","Data":"d1d0cbf507463f137a43d9d446d862f598f014f6cadb38e6090bb89daa04367f"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.409638 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" podStartSLOduration=125.409610014 podStartE2EDuration="2m5.409610014s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.408298898 +0000 UTC m=+146.593275974" watchObservedRunningTime="2026-02-16 14:55:52.409610014 +0000 UTC m=+146.594587090"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.415955 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" podStartSLOduration=125.415924818 podStartE2EDuration="2m5.415924818s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.341029767 +0000 UTC m=+146.526006843" watchObservedRunningTime="2026-02-16 14:55:52.415924818 +0000 UTC m=+146.600901884"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.430905 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" podStartSLOduration=125.430890869 podStartE2EDuration="2m5.430890869s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.429292705 +0000 UTC m=+146.614269791" watchObservedRunningTime="2026-02-16 14:55:52.430890869 +0000 UTC m=+146.615867945"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.438819 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" event={"ID":"3bf0c710-9567-4ed7-8efb-a30798661adb","Type":"ContainerStarted","Data":"d90ba896f91c00043fde2edf9950b8bd05b49dfe70d22b016de544c96298487f"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.438853 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" event={"ID":"3bf0c710-9567-4ed7-8efb-a30798661adb","Type":"ContainerStarted","Data":"bbc88e5f4ba26b884a7dd6bc577ff0c062e6d2bfc8bbed6904f6d880dcc0c28f"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.453157 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hnkwm" event={"ID":"cc99828c-51d1-42ae-a28b-b0fad667f0fa","Type":"ContainerStarted","Data":"3ac9e1eeee88573d4bc5b847fdb96c2b64aa0a38788502289697f086621b357b"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.471827 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" event={"ID":"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7","Type":"ContainerStarted","Data":"9022cd9c4285e25067254f40f80afe5ebe0ce66f82e73c29e4ccfb7b08563c71"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.482638 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" event={"ID":"0b436476-c64b-40ca-a644-1067ccefcecc","Type":"ContainerStarted","Data":"76b2e089be137083b0a361614d3a7524dfbd1bd739ee4f9ff6905cfa40bf6639"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.482696 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" event={"ID":"0b436476-c64b-40ca-a644-1067ccefcecc","Type":"ContainerStarted","Data":"b16197b197c82ee22391a0cb4178278687c0d7cef983e402abef0d0917a0a204"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.488840 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" event={"ID":"226fa561-a051-4bf5-8d7b-b2d1e3871e81","Type":"ContainerStarted","Data":"229d2776fb45a00d1d94b4f3c0366d2b69e2686c5aff92a2fac875499d9bc3ff"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.488912 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" event={"ID":"226fa561-a051-4bf5-8d7b-b2d1e3871e81","Type":"ContainerStarted","Data":"728be1cb1a52cc6c12a82eb0e16b5155e6e0db2af291d72a1f637d8e8dec1999"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.489841 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" podStartSLOduration=125.48981489 podStartE2EDuration="2m5.48981489s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.486810158 +0000 UTC m=+146.671787234" watchObservedRunningTime="2026-02-16 14:55:52.48981489 +0000 UTC m=+146.674791966"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.492292 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.493249 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.993226364 +0000 UTC m=+147.178203440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.496344 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" event={"ID":"afea24b5-a4cc-48f0-869a-f45518e48dd1","Type":"ContainerStarted","Data":"0e73707b81a97013e5668c5ddf5692903e9ee83472977de0b73d3b3d64ef2b7b"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.501796 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mw9hv" podStartSLOduration=125.501764359 podStartE2EDuration="2m5.501764359s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.458465308 +0000 UTC m=+146.643442394" watchObservedRunningTime="2026-02-16 14:55:52.501764359 +0000 UTC m=+146.686741435"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.520081 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" event={"ID":"a8302bc0-d3ed-4950-a728-5569d512a90c","Type":"ContainerStarted","Data":"b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.522527 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ksptd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.522591 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.545285 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fnrqq" event={"ID":"ee710a8b-3390-4749-949f-e8efa983b1ae","Type":"ContainerStarted","Data":"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.573592 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" podStartSLOduration=125.573569835 podStartE2EDuration="2m5.573569835s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.522051127 +0000 UTC m=+146.707028203" watchObservedRunningTime="2026-02-16 14:55:52.573569835 +0000 UTC m=+146.758546911"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.575186 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" podStartSLOduration=125.575175039 podStartE2EDuration="2m5.575175039s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.570797248 +0000 UTC m=+146.755774324" watchObservedRunningTime="2026-02-16 14:55:52.575175039 +0000 UTC m=+146.760152125"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.595274 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.597060 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.09703043 +0000 UTC m=+147.282007506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.608584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" event={"ID":"4e908b56-64e1-410b-952c-a8d5c63242e8","Type":"ContainerStarted","Data":"26ea7178ec1095bdf653a1082865f9a022028975e62b29f28fd650012b718ed4"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.632591 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" podStartSLOduration=125.632568758 podStartE2EDuration="2m5.632568758s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.631780356 +0000 UTC m=+146.816757432" watchObservedRunningTime="2026-02-16 14:55:52.632568758 +0000 UTC m=+146.817545834"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.638565 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" event={"ID":"5621ad75-f2c2-44c8-aff8-ed4da48fc415","Type":"ContainerStarted","Data":"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.638887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" event={"ID":"5621ad75-f2c2-44c8-aff8-ed4da48fc415","Type":"ContainerStarted","Data":"faa1e5018382734db35e1205c39088b34faea391ec6e62672b88da102016cb47"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.639930 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.644582 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bbtvp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.645208 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.682015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" event={"ID":"e037a092-dcda-4227-9872-ea455a432ac6","Type":"ContainerStarted","Data":"40ae97dfb3ba0189218e688201750300667570f00b350f3cd03ceb79e94ebbbe"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.696605 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fnrqq" podStartSLOduration=125.696580719 podStartE2EDuration="2m5.696580719s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.695653123 +0000 UTC m=+146.880630199" watchObservedRunningTime="2026-02-16 14:55:52.696580719 +0000 UTC m=+146.881557795"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.697345 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.697910 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.197885845 +0000 UTC m=+147.382862931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.714864 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" podStartSLOduration=125.714846621 podStartE2EDuration="2m5.714846621s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.714581944 +0000 UTC m=+146.899559040" watchObservedRunningTime="2026-02-16 14:55:52.714846621 +0000 UTC m=+146.899823697"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.722917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jtcsx" event={"ID":"9e989356-1c20-489c-84a5-6437a37ab683","Type":"ContainerStarted","Data":"068c9f5bc65c9dc38f876816acca8d0b581141d4078e5147e17c02671e2c25dc"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.722979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jtcsx" event={"ID":"9e989356-1c20-489c-84a5-6437a37ab683","Type":"ContainerStarted","Data":"599b4e599ca702b8767cf724c2dfd379e411b6b282b955996993be00e690fcab"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.740886 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" event={"ID":"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb","Type":"ContainerStarted","Data":"73d189304e4587d687987b13aecf131755858c9d304d1093f7f8be639c20b8ad"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.740926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" event={"ID":"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb","Type":"ContainerStarted","Data":"9dca879c5ddabac706c5cf1e05914c586f6c8dafa9be326aa157fb3259f02090"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.757686 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" podStartSLOduration=125.757665649 podStartE2EDuration="2m5.757665649s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.754982776 +0000 UTC m=+146.939959862" watchObservedRunningTime="2026-02-16 14:55:52.757665649 +0000 UTC m=+146.942642745"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.759053 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sngv5"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.785080 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" podStartSLOduration=125.785052973 podStartE2EDuration="2m5.785052973s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.784470197 +0000 UTC m=+146.969447283" watchObservedRunningTime="2026-02-16 14:55:52.785052973 +0000 UTC m=+146.970030049"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.810446 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.812291 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.312275442 +0000 UTC m=+147.497252518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.914993 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" podStartSLOduration=125.914959226 podStartE2EDuration="2m5.914959226s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.914491483 +0000 UTC m=+147.099468569" watchObservedRunningTime="2026-02-16 14:55:52.914959226 +0000 UTC m=+147.099936302"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.918178 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podStartSLOduration=125.918164174 podStartE2EDuration="2m5.918164174s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.885699901 +0000 UTC m=+147.070676967" watchObservedRunningTime="2026-02-16 14:55:52.918164174 +0000 UTC m=+147.103141250"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.915402 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.915465 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.415448799 +0000 UTC m=+147.600425875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.922546 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.923425 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.423412218 +0000 UTC m=+147.608389294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.944478 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mw9hv"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.964508 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 14:55:52 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld
Feb 16 14:55:52 crc kubenswrapper[4705]: [+]process-running ok
Feb 16 14:55:52 crc kubenswrapper[4705]: healthz check failed
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.964641 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.000442 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jtcsx" podStartSLOduration=8.000420107 podStartE2EDuration="8.000420107s" podCreationTimestamp="2026-02-16 14:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.000195401 +0000 UTC m=+147.185172497" watchObservedRunningTime="2026-02-16 14:55:53.000420107 +0000 UTC m=+147.185397183"
Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.000968 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" podStartSLOduration=126.000962862 podStartE2EDuration="2m6.000962862s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.942609346 +0000 UTC m=+147.127586422" watchObservedRunningTime="2026-02-16 14:55:53.000962862 +0000 UTC m=+147.185939938"
Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.024198 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.024736 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.524707905 +0000 UTC m=+147.709684981 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.126182 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.126739 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.626722842 +0000 UTC m=+147.811699918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.227837 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.228051 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.728013248 +0000 UTC m=+147.912990324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.228300 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.228820 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.72879647 +0000 UTC m=+147.913773546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.329222 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.329554 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.829476729 +0000 UTC m=+148.014453805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.329674 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.330183 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.830171939 +0000 UTC m=+148.015149005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.430960 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.431377 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.931343642 +0000 UTC m=+148.116320718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.533355 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.533877 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.033847402 +0000 UTC m=+148.218824468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.635853 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.636325 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.13629723 +0000 UTC m=+148.321274316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.738271 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.738718 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.238702087 +0000 UTC m=+148.423679163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.746826 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" event={"ID":"6fee83f9-9187-4930-80d9-8337052eb6f7","Type":"ContainerStarted","Data":"a4360073b0475d8b8ad089b106c09ad0abbaa1c4d93dee9146c09db962b62639"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.749165 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" event={"ID":"f7690b59-a363-4f97-aa47-a6bb9fb41d20","Type":"ContainerStarted","Data":"94202f2bbfbb715a6c179c18b44dbd24324b818e885cde9af3f6ac6f8e340b94"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.749829 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.753396 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" event={"ID":"f74ef58c-d59c-43a0-8c8d-b6830dfd5120","Type":"ContainerStarted","Data":"b86eea4dbe7a3fecfe9d2221570c2a653671135d02a1b32158ff51c4a7908d92"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.755497 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hnkwm" 
event={"ID":"cc99828c-51d1-42ae-a28b-b0fad667f0fa","Type":"ContainerStarted","Data":"0149b75f4a45e69e00a8332c35a3085f0c034ead6c166884e9f5ba125282c9fa"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.755533 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hnkwm" event={"ID":"cc99828c-51d1-42ae-a28b-b0fad667f0fa","Type":"ContainerStarted","Data":"de2296abb94a076fdd50c9815476f641f0fb7c3d1cd2e065f30eab0914dd7599"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.755895 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.760600 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" event={"ID":"1ac01610-0f79-4060-9820-5d2f6251a290","Type":"ContainerStarted","Data":"13ecb7ecb85a8d5af3e60edfaba13d180fea38c883188f0cf8a4b6e1f1af6b93"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.763086 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" event={"ID":"e037a092-dcda-4227-9872-ea455a432ac6","Type":"ContainerStarted","Data":"6db88e5100c7618d96a9b82131c65acba7b2b387f459ca837994f4a5f99468b4"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.764896 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"92fa4d1fa19e2a9e53deac5f0674644e1fad929a54cbc4f8a3e6ae2b69d0f768"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.766688 4705 generic.go:334] "Generic (PLEG): container finished" podID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" containerID="5fa9675e76e9d05c53516ed8415decce4c44f3785514ae5a86a5062278da9f97" exitCode=0 Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.767647 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" event={"ID":"fc25ae00-316a-4dfb-8a83-72fe2318da5e","Type":"ContainerDied","Data":"5fa9675e76e9d05c53516ed8415decce4c44f3785514ae5a86a5062278da9f97"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.770486 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bbtvp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.770552 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.777544 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.777606 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.789356 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.825250 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" podStartSLOduration=126.825188327 podStartE2EDuration="2m6.825188327s" podCreationTimestamp="2026-02-16 14:53:47 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.814218715 +0000 UTC m=+147.999195791" watchObservedRunningTime="2026-02-16 14:55:53.825188327 +0000 UTC m=+148.010165403" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.844730 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.846321 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.346293977 +0000 UTC m=+148.531271053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.846760 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" podStartSLOduration=126.846726239 podStartE2EDuration="2m6.846726239s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.843992184 +0000 UTC m=+148.028969260" watchObservedRunningTime="2026-02-16 14:55:53.846726239 +0000 UTC m=+148.031703315" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.910148 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" podStartSLOduration=126.910114373 podStartE2EDuration="2m6.910114373s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.893040884 +0000 UTC m=+148.078017960" watchObservedRunningTime="2026-02-16 14:55:53.910114373 +0000 UTC m=+148.095091449" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.947106 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:53 crc 
kubenswrapper[4705]: [-]has-synced failed: reason withheld Feb 16 14:55:53 crc kubenswrapper[4705]: [+]process-running ok Feb 16 14:55:53 crc kubenswrapper[4705]: healthz check failed Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.947183 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.948013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.948485 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.448459278 +0000 UTC m=+148.633436344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.986956 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hnkwm" podStartSLOduration=8.986938127 podStartE2EDuration="8.986938127s" podCreationTimestamp="2026-02-16 14:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.985930069 +0000 UTC m=+148.170907145" watchObservedRunningTime="2026-02-16 14:55:53.986938127 +0000 UTC m=+148.171915193" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.049631 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.050101 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.550081824 +0000 UTC m=+148.735058900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.052143 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" podStartSLOduration=127.05211173 podStartE2EDuration="2m7.05211173s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:54.049054906 +0000 UTC m=+148.234031972" watchObservedRunningTime="2026-02-16 14:55:54.05211173 +0000 UTC m=+148.237088806" Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.153448 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.653424467 +0000 UTC m=+148.838401543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.153661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.254609 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.254948 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.754916739 +0000 UTC m=+148.939893815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.255139 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.255538 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.755529926 +0000 UTC m=+148.940507002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.346788 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.356070 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.356240 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.856210376 +0000 UTC m=+149.041187452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.356426 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.356788 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.856780241 +0000 UTC m=+149.041757317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458085 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.458304 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.958255493 +0000 UTC m=+149.143232569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458384 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458450 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458544 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458656 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458828 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.459774 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.460125 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.960108894 +0000 UTC m=+149.145085980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.466403 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.481832 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.482950 4705 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.486655 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 
16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.542322 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.561413 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.561948 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:55.061924735 +0000 UTC m=+149.246901811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.621978 4705 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T14:55:54.482999294Z","Handler":null,"Name":""} Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.643212 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.656407 4705 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.656459 4705 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.658884 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.664158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.669625 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.669670 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.790531 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"ca3659f781365b1b516d4f96015cc54b433e0791327ad0caee81b11538094e88"} Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.790571 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"ef139cdc6ba869521bc492e1a340603d4a5caa0d83f366580d8489b61027bf44"} Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.792432 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bbtvp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.792473 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: 
connection refused" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.815791 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.848034 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.886921 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.944028 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.976121 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:54 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Feb 16 14:55:54 crc kubenswrapper[4705]: [+]process-running ok Feb 16 14:55:54 crc kubenswrapper[4705]: healthz check failed Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.976240 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.163164 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wvxpr"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.180270 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.182553 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.184865 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.228530 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:55 crc kubenswrapper[4705]: W0216 14:55:55.278000 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-8bea621ce270730c0ae7cc2e6d9c2d3e35e52df6cf2e0f949cc64c06cd98135e WatchSource:0}: Error finding container 8bea621ce270730c0ae7cc2e6d9c2d3e35e52df6cf2e0f949cc64c06cd98135e: Status 404 returned error can't find the container with id 8bea621ce270730c0ae7cc2e6d9c2d3e35e52df6cf2e0f949cc64c06cd98135e Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.284610 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.302828 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.302904 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.302949 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") pod \"community-operators-wvxpr\" (UID: 
\"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.341478 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"] Feb 16 14:55:55 crc kubenswrapper[4705]: E0216 14:55:55.341802 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" containerName="collect-profiles" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.341820 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" containerName="collect-profiles" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.341933 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" containerName="collect-profiles" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.343475 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.349048 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.352581 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.404702 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") pod \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.404753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgh9s\" (UniqueName: 
\"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") pod \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.404833 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") pod \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.404984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.405015 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.405051 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.406320 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"fc25ae00-316a-4dfb-8a83-72fe2318da5e" (UID: "fc25ae00-316a-4dfb-8a83-72fe2318da5e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.406949 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.407234 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.409449 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc25ae00-316a-4dfb-8a83-72fe2318da5e" (UID: "fc25ae00-316a-4dfb-8a83-72fe2318da5e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.413028 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s" (OuterVolumeSpecName: "kube-api-access-xgh9s") pod "fc25ae00-316a-4dfb-8a83-72fe2318da5e" (UID: "fc25ae00-316a-4dfb-8a83-72fe2318da5e"). InnerVolumeSpecName "kube-api-access-xgh9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.424710 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506279 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506305 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506444 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:55 crc 
kubenswrapper[4705]: I0216 14:55:55.506464 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506475 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgh9s\" (UniqueName: \"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.534888 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.535787 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.541855 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.554670 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.608250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.609278 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.609424 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.610037 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.610351 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.627435 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.660188 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.710580 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.710636 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.710852 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.744103 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bw88w"]
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.745356 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.762792 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"]
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.765092 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"]
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.799639 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" event={"ID":"fc25ae00-316a-4dfb-8a83-72fe2318da5e","Type":"ContainerDied","Data":"253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.799674 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.799723 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.806918 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"2b803ff58466a770443c56d15dd4b3d36062da2a65c18df51765757a11c9bf30"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.813303 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.813355 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.813446 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.814496 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.826953 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.827297 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5c2eb3901f0eda90e31a80d4a31d6c7490f6649027ed22a1423737b1c2301844"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.827438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c7527bb1853886ed6ebb90e7c916e30c9eaf1b37102eb78025d1dfa09c6d6b79"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.827771 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.829515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1dd0f68350423aa36bdc537a9fee235331107f7207ea48f52de7bde18793f670"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.829552 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8bea621ce270730c0ae7cc2e6d9c2d3e35e52df6cf2e0f949cc64c06cd98135e"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.831186 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d151a78ee3941775c0d63021654ae12ecbd51b8105e7a3c9d9380800b9e006c2"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.831208 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4e93b0f4b5c2f820c15512e9d816331baadb5e71875e205a0ff44977d644e909"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.833300 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerStarted","Data":"3d2f0059d40b4313cb2192bb0c8318a3e59e5de2da0badc178590ca35c5bf347"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.848314 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.848389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" event={"ID":"347b9dab-29d3-4126-994e-6501af72985a","Type":"ContainerStarted","Data":"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.848424 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" event={"ID":"347b9dab-29d3-4126-994e-6501af72985a","Type":"ContainerStarted","Data":"a85e7e62d04fb828a3650bdfb354f55b8cca777243fccbeb90166d171d6b20fc"}
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.848512 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" podStartSLOduration=10.84849355 podStartE2EDuration="10.84849355s" podCreationTimestamp="2026-02-16 14:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:55.844633563 +0000 UTC m=+150.029610629" watchObservedRunningTime="2026-02-16 14:55:55.84849355 +0000 UTC m=+150.033470626"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.849125 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.859532 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.914302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.914351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntm65\" (UniqueName: \"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.914461 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.936170 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.942618 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 14:55:55 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld
Feb 16 14:55:55 crc kubenswrapper[4705]: [+]process-running ok
Feb 16 14:55:55 crc kubenswrapper[4705]: healthz check failed
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.942674 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.946698 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk"
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.946929 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"]
Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.959799 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" podStartSLOduration=128.959779761 podStartE2EDuration="2m8.959779761s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:55.959276777 +0000 UTC m=+150.144253853" watchObservedRunningTime="2026-02-16 14:55:55.959779761 +0000 UTC m=+150.144756837"
Feb 16 14:55:55 crc kubenswrapper[4705]: W0216 14:55:55.963741 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8efc871_44f0_4bbd_b639_6adaee23319a.slice/crio-9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2 WatchSource:0}: Error finding container 9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2: Status 404 returned error can't find the container with id 9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.018471 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.018637 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.018658 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntm65\" (UniqueName: \"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.020442 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.022038 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.059435 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntm65\" (UniqueName: \"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.064835 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.227068 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"]
Feb 16 14:55:56 crc kubenswrapper[4705]: W0216 14:55:56.274553 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f1a76ff_82ae_4dac_88d2_20e6858835e3.slice/crio-8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938 WatchSource:0}: Error finding container 8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938: Status 404 returned error can't find the container with id 8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.437793 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.460639 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"]
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.853563 4705 generic.go:334] "Generic (PLEG): container finished" podID="895390cd-d0f8-46da-a932-6cccd295f203" containerID="47dd83c51982eee0fc8944965237e1d7e630e2a9915e5bf23151e62a40008638" exitCode=0
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.853877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerDied","Data":"47dd83c51982eee0fc8944965237e1d7e630e2a9915e5bf23151e62a40008638"}
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.855351 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.859051 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerID="4e44853d8ab25d2d5626a88e1f0b8ee2df4324e46ca5431c6ba290df4560e9f2" exitCode=0
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.859090 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerDied","Data":"4e44853d8ab25d2d5626a88e1f0b8ee2df4324e46ca5431c6ba290df4560e9f2"}
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.859107 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerStarted","Data":"9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2"}
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.862309 4705 generic.go:334] "Generic (PLEG): container finished" podID="37d84ef8-6e1f-4126-8356-189afb52b629" containerID="2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176" exitCode=0
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.862407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerDied","Data":"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"}
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.862438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerStarted","Data":"84b0c4e14a3064d4d96f1f68cbab03b366c6b38944839fb2b7297a8f31d08a3b"}
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.864313 4705 generic.go:334] "Generic (PLEG): container finished" podID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerID="79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802" exitCode=0
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.864342 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerDied","Data":"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802"}
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.864392 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerStarted","Data":"8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938"}
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.947982 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 14:55:56 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld
Feb 16 14:55:56 crc kubenswrapper[4705]: [+]process-running ok
Feb 16 14:55:56 crc kubenswrapper[4705]: healthz check failed
Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.948058 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.257985 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-cdb8w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.258052 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cdb8w" podUID="29292cac-8f57-4f0b-aeb5-b4b7db9b3e45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.258103 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-cdb8w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.258187 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cdb8w" podUID="29292cac-8f57-4f0b-aeb5-b4b7db9b3e45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.339812 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"]
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.340911 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.344438 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.355301 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"]
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.442737 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.442813 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.443007 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.544444 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.544893 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.544970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.545386 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.545590 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.579394 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.658711 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.737906 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"]
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.738870 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.750388 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"]
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.757948 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.758678 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.760635 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.763422 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.780339 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852483 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852514 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852539 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.946690 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 14:55:57 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld
Feb 16 14:55:57 crc kubenswrapper[4705]: [+]process-running ok
Feb 16 14:55:57 crc kubenswrapper[4705]: healthz check failed
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.946776 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.953668 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.953751 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.953806 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.953884 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.953910 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.954023 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.954412 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.954810 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.969184 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"]
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.973496 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.975255 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.056631 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.078095 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.328929 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"]
Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.342302 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"]
Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.344505 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.348427 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.362324 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"]
Feb 16 14:55:58 crc kubenswrapper[4705]: W0216 14:55:58.371213 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ee875e7_6eab_4220_a29d_316c22f70703.slice/crio-8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413 WatchSource:0}: Error finding container 8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413: Status 404 returned error can't find the container with id 8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413
Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.401126 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 14:55:58 crc kubenswrapper[4705]: W0216 14:55:58.402507 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod517926f3_df0a_4a5d_8806_80753c810a82.slice/crio-587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21 WatchSource:0}: Error finding 
container 587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21: Status 404 returned error can't find the container with id 587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21 Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.466063 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.466566 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.466624 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.520530 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.520623 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.525403 4705 patch_prober.go:28] interesting pod/console-f9d7485db-fnrqq container/console namespace/openshift-console: Startup probe 
status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.525565 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fnrqq" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.568353 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.568445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.568513 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.569056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") pod \"redhat-operators-qkkgp\" (UID: 
\"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.569296 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.590683 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.644984 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.645841 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.648499 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.648882 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.649123 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.699147 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.713646 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.746647 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.747989 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.754350 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.771792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.771847 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876166 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876637 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876688 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876711 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876754 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.900875 4705 generic.go:334] "Generic (PLEG): container finished" podID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerID="d4b9a5df6e9f03bb94d5e2fb0f0b632bf65e0617fc3ef91575b6942f876f86c6" exitCode=0 Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.900957 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerDied","Data":"d4b9a5df6e9f03bb94d5e2fb0f0b632bf65e0617fc3ef91575b6942f876f86c6"} Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.900990 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerStarted","Data":"9e7c06275441e0dc9753d3e97f80b0b2fa0173ed74928bf3711fd998b37c0d36"} Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.911167 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.920631 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerStarted","Data":"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"} Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.920682 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerStarted","Data":"8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413"} Feb 16 14:55:58 
crc kubenswrapper[4705]: I0216 14:55:58.934572 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"517926f3-df0a-4a5d-8806-80753c810a82","Type":"ContainerStarted","Data":"587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21"} Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.939549 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.945879 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:58 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Feb 16 14:55:58 crc kubenswrapper[4705]: [+]process-running ok Feb 16 14:55:58 crc kubenswrapper[4705]: healthz check failed Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.945953 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.973326 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.977948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.978019 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.978091 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.979911 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.980145 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " 
pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.996197 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.082531 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.225828 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"] Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.511880 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:55:59 crc kubenswrapper[4705]: W0216 14:55:59.527175 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6d685f5_d57e_434b_93c8_727195de9479.slice/crio-73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701 WatchSource:0}: Error finding container 73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701: Status 404 returned error can't find the container with id 73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701 Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.575451 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.943385 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.972313 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="0ee875e7-6eab-4220-a29d-316c22f70703" containerID="d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.972400 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerDied","Data":"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"} Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.980098 4705 generic.go:334] "Generic (PLEG): container finished" podID="c6d685f5-d57e-434b-93c8-727195de9479" containerID="fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.980173 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerDied","Data":"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4"} Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.980198 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerStarted","Data":"73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701"} Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.996216 4705 generic.go:334] "Generic (PLEG): container finished" podID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerID="2d8d5694b911f4b43d4018735e7222f174757c80b72ed579b3b1544c211daf10" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.996321 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerDied","Data":"2d8d5694b911f4b43d4018735e7222f174757c80b72ed579b3b1544c211daf10"} Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.996352 
4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerStarted","Data":"5ca975ac41d20405951f16e100085714e84618ea7435589dc42061daef0e3c0d"} Feb 16 14:56:00 crc kubenswrapper[4705]: I0216 14:56:00.007664 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc","Type":"ContainerStarted","Data":"c996240936f5eeaeca98d5df76308b904d1db1d2dd5b82b1ddb5ffdbdb9a01f7"} Feb 16 14:56:00 crc kubenswrapper[4705]: I0216 14:56:00.026253 4705 generic.go:334] "Generic (PLEG): container finished" podID="517926f3-df0a-4a5d-8806-80753c810a82" containerID="b36af6dea40a2cc15704cf0e887eaea1973a1fc8db61b4e54a43cdebd09a1376" exitCode=0 Feb 16 14:56:00 crc kubenswrapper[4705]: I0216 14:56:00.026845 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"517926f3-df0a-4a5d-8806-80753c810a82","Type":"ContainerDied","Data":"b36af6dea40a2cc15704cf0e887eaea1973a1fc8db61b4e54a43cdebd09a1376"} Feb 16 14:56:00 crc kubenswrapper[4705]: I0216 14:56:00.037743 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:56:01 crc kubenswrapper[4705]: I0216 14:56:01.069948 4705 generic.go:334] "Generic (PLEG): container finished" podID="fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" containerID="e8e80d6deccafa5829fccbb82a70b8cb3676a15871eda8d63e729b44d986ab2b" exitCode=0 Feb 16 14:56:01 crc kubenswrapper[4705]: I0216 14:56:01.070200 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc","Type":"ContainerDied","Data":"e8e80d6deccafa5829fccbb82a70b8cb3676a15871eda8d63e729b44d986ab2b"} Feb 16 14:56:01 crc kubenswrapper[4705]: I0216 14:56:01.685757 
4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:56:01 crc kubenswrapper[4705]: I0216 14:56:01.685819 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.559250 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.650006 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") pod \"517926f3-df0a-4a5d-8806-80753c810a82\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.650121 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") pod \"517926f3-df0a-4a5d-8806-80753c810a82\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.650189 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "517926f3-df0a-4a5d-8806-80753c810a82" (UID: "517926f3-df0a-4a5d-8806-80753c810a82"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.650474 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.653012 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.659110 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "517926f3-df0a-4a5d-8806-80753c810a82" (UID: "517926f3-df0a-4a5d-8806-80753c810a82"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.751861 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") pod \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.752077 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") pod \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.752504 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.752576 
4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" (UID: "fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.757625 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" (UID: "fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.856047 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.856502 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.092153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc","Type":"ContainerDied","Data":"c996240936f5eeaeca98d5df76308b904d1db1d2dd5b82b1ddb5ffdbdb9a01f7"} Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.092204 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c996240936f5eeaeca98d5df76308b904d1db1d2dd5b82b1ddb5ffdbdb9a01f7" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.092391 4705 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.097777 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"517926f3-df0a-4a5d-8806-80753c810a82","Type":"ContainerDied","Data":"587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21"} Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.097835 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.097942 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.749541 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hnkwm" Feb 16 14:56:07 crc kubenswrapper[4705]: I0216 14:56:07.265485 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:56:08 crc kubenswrapper[4705]: I0216 14:56:08.524984 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:56:08 crc kubenswrapper[4705]: I0216 14:56:08.532553 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:56:09 crc kubenswrapper[4705]: I0216 14:56:09.682939 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:56:09 crc 
kubenswrapper[4705]: I0216 14:56:09.690495 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:56:09 crc kubenswrapper[4705]: I0216 14:56:09.973630 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:56:12 crc kubenswrapper[4705]: I0216 14:56:12.657085 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:56:12 crc kubenswrapper[4705]: I0216 14:56:12.657321 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" containerID="cri-o://579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19" gracePeriod=30 Feb 16 14:56:12 crc kubenswrapper[4705]: I0216 14:56:12.690501 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:56:12 crc kubenswrapper[4705]: I0216 14:56:12.691515 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" containerID="cri-o://b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d" gracePeriod=30 Feb 16 14:56:14 crc kubenswrapper[4705]: I0216 14:56:14.853648 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:56:15 crc kubenswrapper[4705]: I0216 
14:56:15.187193 4705 generic.go:334] "Generic (PLEG): container finished" podID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerID="579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19" exitCode=0 Feb 16 14:56:15 crc kubenswrapper[4705]: I0216 14:56:15.187242 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" event={"ID":"51cb62a1-dd06-4f6b-aa37-c824973a7df0","Type":"ContainerDied","Data":"579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19"} Feb 16 14:56:15 crc kubenswrapper[4705]: I0216 14:56:15.854817 4705 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-s6knp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 16 14:56:15 crc kubenswrapper[4705]: I0216 14:56:15.854944 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 16 14:56:16 crc kubenswrapper[4705]: I0216 14:56:16.199521 4705 generic.go:334] "Generic (PLEG): container finished" podID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerID="b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d" exitCode=0 Feb 16 14:56:16 crc kubenswrapper[4705]: I0216 14:56:16.199939 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" event={"ID":"a8302bc0-d3ed-4950-a728-5569d512a90c","Type":"ContainerDied","Data":"b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d"} Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.384773 4705 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.390827 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.463623 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:19 crc kubenswrapper[4705]: E0216 14:56:19.464340 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.464472 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: E0216 14:56:19.464589 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.464684 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: E0216 14:56:19.466847 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.466888 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: E0216 14:56:19.466914 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517926f3-df0a-4a5d-8806-80753c810a82" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.466923 4705 
state_mem.go:107] "Deleted CPUSet assignment" podUID="517926f3-df0a-4a5d-8806-80753c810a82" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467301 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="517926f3-df0a-4a5d-8806-80753c810a82" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467333 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467344 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467361 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467951 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.468070 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.533855 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ksptd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.533997 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556558 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") pod \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556597 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") pod \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556708 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") pod 
\"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556743 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"a8302bc0-d3ed-4950-a728-5569d512a90c\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556771 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"a8302bc0-d3ed-4950-a728-5569d512a90c\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556818 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") pod \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556847 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"a8302bc0-d3ed-4950-a728-5569d512a90c\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556945 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"a8302bc0-d3ed-4950-a728-5569d512a90c\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556986 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") pod \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.557253 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.557292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.558387 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca" (OuterVolumeSpecName: "client-ca") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.558453 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config" (OuterVolumeSpecName: "config") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.558657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca" (OuterVolumeSpecName: "client-ca") pod "a8302bc0-d3ed-4950-a728-5569d512a90c" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.557339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559000 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config" (OuterVolumeSpecName: "config") pod "a8302bc0-d3ed-4950-a728-5569d512a90c" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559073 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrff\" (UniqueName: \"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559103 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559243 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559257 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559269 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559279 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc 
kubenswrapper[4705]: I0216 14:56:19.560173 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.565529 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a8302bc0-d3ed-4950-a728-5569d512a90c" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.565632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46" (OuterVolumeSpecName: "kube-api-access-x2k46") pod "a8302bc0-d3ed-4950-a728-5569d512a90c" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c"). InnerVolumeSpecName "kube-api-access-x2k46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.565637 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.574003 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd" (OuterVolumeSpecName: "kube-api-access-r5khd") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "kube-api-access-r5khd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.660387 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.660439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.660467 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrff\" (UniqueName: 
\"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661399 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661418 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661432 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661445 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661456 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661947 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.662055 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.662509 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.666890 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.682062 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrff\" (UniqueName: \"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " 
pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.791325 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.223146 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" event={"ID":"51cb62a1-dd06-4f6b-aa37-c824973a7df0","Type":"ContainerDied","Data":"68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a"} Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.223213 4705 scope.go:117] "RemoveContainer" containerID="579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.223357 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.238358 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" event={"ID":"a8302bc0-d3ed-4950-a728-5569d512a90c","Type":"ContainerDied","Data":"a885e38805c34d5c1e7c89b9f1f29de1c4b5e2713a9a9b37541794c592748f30"} Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.238519 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.267016 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.269603 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.274676 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.280000 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.427275 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" path="/var/lib/kubelet/pods/51cb62a1-dd06-4f6b-aa37-c824973a7df0/volumes" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.428198 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" path="/var/lib/kubelet/pods/a8302bc0-d3ed-4950-a728-5569d512a90c/volumes" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.270858 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.272332 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.273891 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.275201 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.275208 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.275642 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.276749 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.276850 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.289204 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.406876 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.406922 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.406946 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.407035 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.507996 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.508313 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") pod 
\"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.508508 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.508631 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.509591 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.509950 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.519148 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.523258 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.603906 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:27 crc kubenswrapper[4705]: E0216 14:56:27.022256 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 14:56:27 crc kubenswrapper[4705]: E0216 14:56:27.023492 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hr5j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ngfnt_openshift-marketplace(1f1a76ff-82ae-4dac-88d2-20e6858835e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.024413 4705 scope.go:117] "RemoveContainer" containerID="b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d" Feb 16 14:56:27 crc kubenswrapper[4705]: E0216 14:56:27.024629 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ngfnt" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.298049 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerStarted","Data":"142e52fe965dccc8447bce8b51d66eb18e77b2fbf8857b7b9eaf42bda581cb4b"} Feb 16 14:56:27 crc kubenswrapper[4705]: E0216 14:56:27.320690 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ngfnt" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.414696 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:27 crc kubenswrapper[4705]: W0216 14:56:27.427098 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f71fd9_bba2_481c_8b42_46894c93e49d.slice/crio-4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9 WatchSource:0}: Error finding container 4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9: Status 404 returned error can't find the container with id 4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9 Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.427696 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8m64f"] Feb 16 14:56:27 crc kubenswrapper[4705]: W0216 14:56:27.430324 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67dea3c6_e6a4_4078_9bf2_6928c39f498b.slice/crio-0a2901551587f9516f8d8d162155036d25d2cfaa1ea31ea3ccfe8605b7197045 WatchSource:0}: Error finding container 0a2901551587f9516f8d8d162155036d25d2cfaa1ea31ea3ccfe8605b7197045: Status 404 returned error can't find the container with id 0a2901551587f9516f8d8d162155036d25d2cfaa1ea31ea3ccfe8605b7197045 Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.437428 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:28 crc kubenswrapper[4705]: E0216 14:56:28.175752 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6d685f5_d57e_434b_93c8_727195de9479.slice/crio-22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130.scope\": RecentStats: unable to find data in memory cache]" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.309712 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" event={"ID":"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650","Type":"ContainerStarted","Data":"5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.309763 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" event={"ID":"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650","Type":"ContainerStarted","Data":"9a0411d516836163a23542eb670fb3eb2f699e5a31aa118fbcbbe952241a5c87"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.311456 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 
14:56:28.318722 4705 generic.go:334] "Generic (PLEG): container finished" podID="37d84ef8-6e1f-4126-8356-189afb52b629" containerID="fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.319606 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerDied","Data":"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.339906 4705 generic.go:334] "Generic (PLEG): container finished" podID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerID="3bf941b0ceb33444ebc5dd947fedfa63976db0f6ca005483c4d7b0a244761dba" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.340015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerDied","Data":"3bf941b0ceb33444ebc5dd947fedfa63976db0f6ca005483c4d7b0a244761dba"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.343097 4705 generic.go:334] "Generic (PLEG): container finished" podID="895390cd-d0f8-46da-a932-6cccd295f203" containerID="142e52fe965dccc8447bce8b51d66eb18e77b2fbf8857b7b9eaf42bda581cb4b" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.343153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerDied","Data":"142e52fe965dccc8447bce8b51d66eb18e77b2fbf8857b7b9eaf42bda581cb4b"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.350115 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerID="73ba943d06af17d02c46446ace18358f2e018622fa9d08256b673061932ee618" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 
14:56:28.350233 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerDied","Data":"73ba943d06af17d02c46446ace18358f2e018622fa9d08256b673061932ee618"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.350887 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" podStartSLOduration=16.350869877 podStartE2EDuration="16.350869877s" podCreationTimestamp="2026-02-16 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:28.348977915 +0000 UTC m=+182.533954991" watchObservedRunningTime="2026-02-16 14:56:28.350869877 +0000 UTC m=+182.535846973" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.356192 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerStarted","Data":"bc3f70071f15f7c623a394166db10d02b47e2458284d6c7b790a1b750e33d8c7"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.359575 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" event={"ID":"42f71fd9-bba2-481c-8b42-46894c93e49d","Type":"ContainerStarted","Data":"db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.359599 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" event={"ID":"42f71fd9-bba2-481c-8b42-46894c93e49d","Type":"ContainerStarted","Data":"4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.360224 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.364925 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.369279 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ee875e7-6eab-4220-a29d-316c22f70703" containerID="a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.369326 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerDied","Data":"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.373573 4705 generic.go:334] "Generic (PLEG): container finished" podID="c6d685f5-d57e-434b-93c8-727195de9479" containerID="22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.373614 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerDied","Data":"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.390665 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8m64f" event={"ID":"67dea3c6-e6a4-4078-9bf2-6928c39f498b","Type":"ContainerStarted","Data":"3bad768853b1c2d8d2d2f1e547c5acf2aac3823d8b60521f81be7dba9e0d242e"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.390719 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8m64f" 
event={"ID":"67dea3c6-e6a4-4078-9bf2-6928c39f498b","Type":"ContainerStarted","Data":"0a2901551587f9516f8d8d162155036d25d2cfaa1ea31ea3ccfe8605b7197045"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.546137 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" podStartSLOduration=16.546101338 podStartE2EDuration="16.546101338s" podCreationTimestamp="2026-02-16 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:28.539049324 +0000 UTC m=+182.724026420" watchObservedRunningTime="2026-02-16 14:56:28.546101338 +0000 UTC m=+182.731078434" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.637975 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.676280 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:56:29 crc kubenswrapper[4705]: I0216 14:56:29.399734 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8m64f" event={"ID":"67dea3c6-e6a4-4078-9bf2-6928c39f498b","Type":"ContainerStarted","Data":"65daf2952e4d153e851655f006c9bc78eeec8179a7fd2a728b9c8943b8801e3e"} Feb 16 14:56:29 crc kubenswrapper[4705]: I0216 14:56:29.404911 4705 generic.go:334] "Generic (PLEG): container finished" podID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerID="bc3f70071f15f7c623a394166db10d02b47e2458284d6c7b790a1b750e33d8c7" exitCode=0 Feb 16 14:56:29 crc kubenswrapper[4705]: I0216 14:56:29.405207 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" 
event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerDied","Data":"bc3f70071f15f7c623a394166db10d02b47e2458284d6c7b790a1b750e33d8c7"} Feb 16 14:56:29 crc kubenswrapper[4705]: I0216 14:56:29.420631 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8m64f" podStartSLOduration=162.420612055 podStartE2EDuration="2m42.420612055s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:29.419528165 +0000 UTC m=+183.604505251" watchObservedRunningTime="2026-02-16 14:56:29.420612055 +0000 UTC m=+183.605589121" Feb 16 14:56:31 crc kubenswrapper[4705]: I0216 14:56:31.418991 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerStarted","Data":"3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870"} Feb 16 14:56:31 crc kubenswrapper[4705]: I0216 14:56:31.440708 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gmh5s" podStartSLOduration=2.941285534 podStartE2EDuration="34.440681969s" podCreationTimestamp="2026-02-16 14:55:57 +0000 UTC" firstStartedPulling="2026-02-16 14:55:58.904549763 +0000 UTC m=+153.089526829" lastFinishedPulling="2026-02-16 14:56:30.403946188 +0000 UTC m=+184.588923264" observedRunningTime="2026-02-16 14:56:31.440177075 +0000 UTC m=+185.625154161" watchObservedRunningTime="2026-02-16 14:56:31.440681969 +0000 UTC m=+185.625659045" Feb 16 14:56:31 crc kubenswrapper[4705]: I0216 14:56:31.684390 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 16 14:56:31 crc kubenswrapper[4705]: I0216 14:56:31.684472 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:56:32 crc kubenswrapper[4705]: I0216 14:56:32.599999 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:32 crc kubenswrapper[4705]: I0216 14:56:32.600226 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerName="controller-manager" containerID="cri-o://db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8" gracePeriod=30 Feb 16 14:56:32 crc kubenswrapper[4705]: I0216 14:56:32.702783 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:32 crc kubenswrapper[4705]: I0216 14:56:32.704709 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerName="route-controller-manager" containerID="cri-o://5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873" gracePeriod=30 Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.437602 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerStarted","Data":"44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff"} Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 
14:56:33.439572 4705 generic.go:334] "Generic (PLEG): container finished" podID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerID="5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873" exitCode=0 Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.439634 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" event={"ID":"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650","Type":"ContainerDied","Data":"5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873"} Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.439655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" event={"ID":"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650","Type":"ContainerDied","Data":"9a0411d516836163a23542eb670fb3eb2f699e5a31aa118fbcbbe952241a5c87"} Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.439667 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a0411d516836163a23542eb670fb3eb2f699e5a31aa118fbcbbe952241a5c87" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.445336 4705 generic.go:334] "Generic (PLEG): container finished" podID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerID="db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8" exitCode=0 Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.445419 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" event={"ID":"42f71fd9-bba2-481c-8b42-46894c93e49d","Type":"ContainerDied","Data":"db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8"} Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.462193 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qkkgp" podStartSLOduration=3.142344556 podStartE2EDuration="35.462169241s" 
podCreationTimestamp="2026-02-16 14:55:58 +0000 UTC" firstStartedPulling="2026-02-16 14:56:00.002287212 +0000 UTC m=+154.187264288" lastFinishedPulling="2026-02-16 14:56:32.322111897 +0000 UTC m=+186.507088973" observedRunningTime="2026-02-16 14:56:33.459318122 +0000 UTC m=+187.644295218" watchObservedRunningTime="2026-02-16 14:56:33.462169241 +0000 UTC m=+187.647146317" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.481785 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.602640 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") pod \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.603252 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") pod \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.603331 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") pod \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.603472 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") pod \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " 
Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.604059 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config" (OuterVolumeSpecName: "config") pod "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" (UID: "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.604615 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca" (OuterVolumeSpecName: "client-ca") pod "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" (UID: "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.612484 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" (UID: "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.617000 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4" (OuterVolumeSpecName: "kube-api-access-2zkw4") pod "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" (UID: "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650"). InnerVolumeSpecName "kube-api-access-2zkw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.705324 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.705436 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.705451 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.705460 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.855314 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009416 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009488 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009514 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcrff\" (UniqueName: \"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009604 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.010432 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config" (OuterVolumeSpecName: "config") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.010557 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.010586 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca" (OuterVolumeSpecName: "client-ca") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.017943 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.018122 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff" (OuterVolumeSpecName: "kube-api-access-wcrff") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "kube-api-access-wcrff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111681 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111725 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111736 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111745 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111758 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcrff\" (UniqueName: \"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.281779 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:34 crc kubenswrapper[4705]: E0216 14:56:34.282214 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerName="route-controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282237 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerName="route-controller-manager" Feb 16 14:56:34 
crc kubenswrapper[4705]: E0216 14:56:34.282248 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerName="controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282256 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerName="controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282380 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerName="controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282399 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerName="route-controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282941 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.285737 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.286818 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.295486 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.299760 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430065 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430120 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430185 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430239 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430263 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430308 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430343 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430388 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " 
pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430421 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.453491 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" event={"ID":"42f71fd9-bba2-481c-8b42-46894c93e49d","Type":"ContainerDied","Data":"4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9"} Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.453529 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.453554 4705 scope.go:117] "RemoveContainer" containerID="db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.456442 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.456514 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerStarted","Data":"d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b"} Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.498535 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wvxpr" podStartSLOduration=2.872769518 podStartE2EDuration="39.498517812s" podCreationTimestamp="2026-02-16 14:55:55 +0000 UTC" firstStartedPulling="2026-02-16 14:55:56.855128762 +0000 UTC m=+151.040105828" lastFinishedPulling="2026-02-16 14:56:33.480877036 +0000 UTC m=+187.665854122" observedRunningTime="2026-02-16 14:56:34.478495371 +0000 UTC m=+188.663472467" watchObservedRunningTime="2026-02-16 14:56:34.498517812 +0000 UTC m=+188.683494888" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.498738 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.502343 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.508041 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.510732 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.531636 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532139 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532179 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532206 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532226 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc 
kubenswrapper[4705]: I0216 14:56:34.532242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532294 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532331 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.533964 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: 
\"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.534074 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.535658 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.535790 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.535955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.539880 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.542854 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.551241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.552214 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.555254 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.638564 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.659326 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:35 crc kubenswrapper[4705]: I0216 14:56:35.542591 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:56:35 crc kubenswrapper[4705]: I0216 14:56:35.542666 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.224243 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.226794 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.228933 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.229742 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.236572 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.257803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" 
Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.257865 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.359178 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.359268 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.359397 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.378661 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 
14:56:36.427306 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" path="/var/lib/kubelet/pods/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650/volumes" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.428229 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" path="/var/lib/kubelet/pods/42f71fd9-bba2-481c-8b42-46894c93e49d/volumes" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.436639 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.522521 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.568735 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.047390 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] Feb 16 14:56:37 crc kubenswrapper[4705]: W0216 14:56:37.056467 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cc70a9e_0338_4f1f_8c4b_1ef8d62b424a.slice/crio-c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2 WatchSource:0}: Error finding container c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2: Status 404 returned error can't find the container with id c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2 Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.119655 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:37 crc kubenswrapper[4705]: W0216 
14:56:37.141883 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod082d4064_6b1c_4a39_9839_3466e7a1ce3a.slice/crio-1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4 WatchSource:0}: Error finding container 1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4: Status 404 returned error can't find the container with id 1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4 Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.202780 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.476450 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d10e6ed9-d49d-45c6-8cbd-536751ec37d4","Type":"ContainerStarted","Data":"7afe14e3111f637d23e68bc4226f8826241d6020b90b0d9c519f97d3c5c994b0"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.481770 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerStarted","Data":"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.483313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" event={"ID":"082d4064-6b1c-4a39-9839-3466e7a1ce3a","Type":"ContainerStarted","Data":"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.483518 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" event={"ID":"082d4064-6b1c-4a39-9839-3466e7a1ce3a","Type":"ContainerStarted","Data":"1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4"} Feb 16 
14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.483616 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.485540 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerStarted","Data":"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.488000 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerStarted","Data":"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.490840 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerStarted","Data":"d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.493216 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" event={"ID":"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a","Type":"ContainerStarted","Data":"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.493247 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" event={"ID":"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a","Type":"ContainerStarted","Data":"c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.493842 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.528617 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bw88w" podStartSLOduration=2.926467036 podStartE2EDuration="42.528596081s" podCreationTimestamp="2026-02-16 14:55:55 +0000 UTC" firstStartedPulling="2026-02-16 14:55:56.86342465 +0000 UTC m=+151.048401726" lastFinishedPulling="2026-02-16 14:56:36.465553695 +0000 UTC m=+190.650530771" observedRunningTime="2026-02-16 14:56:37.511466639 +0000 UTC m=+191.696443725" watchObservedRunningTime="2026-02-16 14:56:37.528596081 +0000 UTC m=+191.713573157" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.531462 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vb279" podStartSLOduration=3.028993978 podStartE2EDuration="40.531449729s" podCreationTimestamp="2026-02-16 14:55:57 +0000 UTC" firstStartedPulling="2026-02-16 14:55:58.92586426 +0000 UTC m=+153.110841336" lastFinishedPulling="2026-02-16 14:56:36.428320011 +0000 UTC m=+190.613297087" observedRunningTime="2026-02-16 14:56:37.52785928 +0000 UTC m=+191.712836356" watchObservedRunningTime="2026-02-16 14:56:37.531449729 +0000 UTC m=+191.716426815" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.571125 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jlgwg" podStartSLOduration=3.048399132 podStartE2EDuration="39.57109014s" podCreationTimestamp="2026-02-16 14:55:58 +0000 UTC" firstStartedPulling="2026-02-16 14:55:59.984955215 +0000 UTC m=+154.169932291" lastFinishedPulling="2026-02-16 14:56:36.507646223 +0000 UTC m=+190.692623299" observedRunningTime="2026-02-16 14:56:37.566055541 +0000 UTC m=+191.751032617" watchObservedRunningTime="2026-02-16 14:56:37.57109014 +0000 UTC m=+191.756067216" Feb 16 14:56:37 crc 
kubenswrapper[4705]: I0216 14:56:37.634925 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sj9bt" podStartSLOduration=4.450408079 podStartE2EDuration="42.634903975s" podCreationTimestamp="2026-02-16 14:55:55 +0000 UTC" firstStartedPulling="2026-02-16 14:55:56.860535181 +0000 UTC m=+151.045512257" lastFinishedPulling="2026-02-16 14:56:35.045031087 +0000 UTC m=+189.230008153" observedRunningTime="2026-02-16 14:56:37.631615325 +0000 UTC m=+191.816592411" watchObservedRunningTime="2026-02-16 14:56:37.634903975 +0000 UTC m=+191.819881041" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.636426 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" podStartSLOduration=5.636419347 podStartE2EDuration="5.636419347s" podCreationTimestamp="2026-02-16 14:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:37.593952409 +0000 UTC m=+191.778929515" watchObservedRunningTime="2026-02-16 14:56:37.636419347 +0000 UTC m=+191.821396423" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.659502 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.659566 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.735544 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.774680 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" 
podStartSLOduration=5.7746517 podStartE2EDuration="5.7746517s" podCreationTimestamp="2026-02-16 14:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:37.673879667 +0000 UTC m=+191.858856753" watchObservedRunningTime="2026-02-16 14:56:37.7746517 +0000 UTC m=+191.959628776" Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.057780 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.057840 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:56:38 crc kubenswrapper[4705]: E0216 14:56:38.308811 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd10e6ed9_d49d_45c6_8cbd_536751ec37d4.slice/crio-conmon-c5a2e101d0b2cb0b252dbd909f60f1ae14bedee66e9cdaa812c669200d50d06b.scope\": RecentStats: unable to find data in memory cache]" Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.500472 4705 generic.go:334] "Generic (PLEG): container finished" podID="d10e6ed9-d49d-45c6-8cbd-536751ec37d4" containerID="c5a2e101d0b2cb0b252dbd909f60f1ae14bedee66e9cdaa812c669200d50d06b" exitCode=0 Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.500697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d10e6ed9-d49d-45c6-8cbd-536751ec37d4","Type":"ContainerDied","Data":"c5a2e101d0b2cb0b252dbd909f60f1ae14bedee66e9cdaa812c669200d50d06b"} Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.501923 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.508550 
4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"
Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.573129 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.700226 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.700275 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.083706 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.083765 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.117705 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vb279" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server" probeResult="failure" output=<
Feb 16 14:56:39 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 14:56:39 crc kubenswrapper[4705]: >
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.746347 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qkkgp" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server" probeResult="failure" output=<
Feb 16 14:56:39 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 14:56:39 crc kubenswrapper[4705]: >
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.885965 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.917389 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") pod \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") "
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.917470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") pod \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") "
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.917553 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d10e6ed9-d49d-45c6-8cbd-536751ec37d4" (UID: "d10e6ed9-d49d-45c6-8cbd-536751ec37d4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.918867 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.943750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d10e6ed9-d49d-45c6-8cbd-536751ec37d4" (UID: "d10e6ed9-d49d-45c6-8cbd-536751ec37d4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.020169 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.130230 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jlgwg" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server" probeResult="failure" output=<
Feb 16 14:56:40 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 14:56:40 crc kubenswrapper[4705]: >
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.513296 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.514147 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d10e6ed9-d49d-45c6-8cbd-536751ec37d4","Type":"ContainerDied","Data":"7afe14e3111f637d23e68bc4226f8826241d6020b90b0d9c519f97d3c5c994b0"}
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.514258 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7afe14e3111f637d23e68bc4226f8826241d6020b90b0d9c519f97d3c5c994b0"
Feb 16 14:56:41 crc kubenswrapper[4705]: I0216 14:56:41.520335 4705 generic.go:334] "Generic (PLEG): container finished" podID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerID="e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5" exitCode=0
Feb 16 14:56:41 crc kubenswrapper[4705]: I0216 14:56:41.520405 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerDied","Data":"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5"}
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.222353 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 14:56:43 crc kubenswrapper[4705]: E0216 14:56:43.225218 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10e6ed9-d49d-45c6-8cbd-536751ec37d4" containerName="pruner"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.225236 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10e6ed9-d49d-45c6-8cbd-536751ec37d4" containerName="pruner"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.225343 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10e6ed9-d49d-45c6-8cbd-536751ec37d4" containerName="pruner"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.225852 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.230184 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.231214 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.231297 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.277681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.277737 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.277803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.379832 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.380008 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.380011 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.380050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.380136 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.401626 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.538962 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerStarted","Data":"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8"}
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.544354 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.029636 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 14:56:44 crc kubenswrapper[4705]: W0216 14:56:44.050662 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6b45f345_45b8_4e21_a4da_46e4d43e429e.slice/crio-87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014 WatchSource:0}: Error finding container 87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014: Status 404 returned error can't find the container with id 87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.544675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6b45f345-45b8-4e21-a4da-46e4d43e429e","Type":"ContainerStarted","Data":"8a30aa7cf7e0f680c219c737827d7511374124ec3b0f2c971c1e7c9989007cdc"}
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.545077 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6b45f345-45b8-4e21-a4da-46e4d43e429e","Type":"ContainerStarted","Data":"87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014"}
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.567493 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.567468103 podStartE2EDuration="1.567468103s" podCreationTimestamp="2026-02-16 14:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:44.562478641 +0000 UTC m=+198.747455727" watchObservedRunningTime="2026-02-16 14:56:44.567468103 +0000 UTC m=+198.752445199"
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.588381 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ngfnt" podStartSLOduration=3.489818689 podStartE2EDuration="49.588350073s" podCreationTimestamp="2026-02-16 14:55:55 +0000 UTC" firstStartedPulling="2026-02-16 14:55:56.86667902 +0000 UTC m=+151.051656106" lastFinishedPulling="2026-02-16 14:56:42.965210404 +0000 UTC m=+197.150187490" observedRunningTime="2026-02-16 14:56:44.587464698 +0000 UTC m=+198.772441794" watchObservedRunningTime="2026-02-16 14:56:44.588350073 +0000 UTC m=+198.773327149"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.614887 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wvxpr"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.660873 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.660958 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.718005 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.860721 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.860792 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.903045 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.065714 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.065791 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.114802 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.629026 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.632293 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.156026 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"]
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.175044 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.229994 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.570855 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bw88w" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="registry-server" containerID="cri-o://bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d" gracePeriod=2
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.763819 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.826963 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.063182 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.136531 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.176025 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") pod \"37d84ef8-6e1f-4126-8356-189afb52b629\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") "
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.176145 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntm65\" (UniqueName: \"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") pod \"37d84ef8-6e1f-4126-8356-189afb52b629\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") "
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.176311 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") pod \"37d84ef8-6e1f-4126-8356-189afb52b629\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") "
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.179848 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities" (OuterVolumeSpecName: "utilities") pod "37d84ef8-6e1f-4126-8356-189afb52b629" (UID: "37d84ef8-6e1f-4126-8356-189afb52b629"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.185579 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65" (OuterVolumeSpecName: "kube-api-access-ntm65") pod "37d84ef8-6e1f-4126-8356-189afb52b629" (UID: "37d84ef8-6e1f-4126-8356-189afb52b629"). InnerVolumeSpecName "kube-api-access-ntm65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.185815 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.265861 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37d84ef8-6e1f-4126-8356-189afb52b629" (UID: "37d84ef8-6e1f-4126-8356-189afb52b629"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.279266 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.279296 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.279317 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntm65\" (UniqueName: \"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590082 4705 generic.go:334] "Generic (PLEG): container finished" podID="37d84ef8-6e1f-4126-8356-189afb52b629" containerID="bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d" exitCode=0
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590189 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerDied","Data":"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"}
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590296 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerDied","Data":"84b0c4e14a3064d4d96f1f68cbab03b366c6b38944839fb2b7297a8f31d08a3b"}
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590209 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590358 4705 scope.go:117] "RemoveContainer" containerID="bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.623926 4705 scope.go:117] "RemoveContainer" containerID="fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.641877 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"]
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.647533 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"]
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.668749 4705 scope.go:117] "RemoveContainer" containerID="2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.684200 4705 scope.go:117] "RemoveContainer" containerID="bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"
Feb 16 14:56:49 crc kubenswrapper[4705]: E0216 14:56:49.684714 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d\": container with ID starting with bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d not found: ID does not exist" containerID="bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.684869 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"} err="failed to get container status \"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d\": rpc error: code = NotFound desc = could not find container \"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d\": container with ID starting with bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d not found: ID does not exist"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.685030 4705 scope.go:117] "RemoveContainer" containerID="fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"
Feb 16 14:56:49 crc kubenswrapper[4705]: E0216 14:56:49.685535 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46\": container with ID starting with fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46 not found: ID does not exist" containerID="fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.685579 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"} err="failed to get container status \"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46\": rpc error: code = NotFound desc = could not find container \"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46\": container with ID starting with fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46 not found: ID does not exist"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.685625 4705 scope.go:117] "RemoveContainer" containerID="2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"
Feb 16 14:56:49 crc kubenswrapper[4705]: E0216 14:56:49.691134 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176\": container with ID starting with 2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176 not found: ID does not exist" containerID="2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.691205 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"} err="failed to get container status \"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176\": rpc error: code = NotFound desc = could not find container \"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176\": container with ID starting with 2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176 not found: ID does not exist"
Feb 16 14:56:50 crc kubenswrapper[4705]: I0216 14:56:50.434776 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" path="/var/lib/kubelet/pods/37d84ef8-6e1f-4126-8356-189afb52b629/volumes"
Feb 16 14:56:51 crc kubenswrapper[4705]: I0216 14:56:51.541968 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"]
Feb 16 14:56:51 crc kubenswrapper[4705]: I0216 14:56:51.542420 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vb279" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server" containerID="cri-o://6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d" gracePeriod=2
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.118540 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.224576 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") pod \"0ee875e7-6eab-4220-a29d-316c22f70703\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") "
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.225112 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") pod \"0ee875e7-6eab-4220-a29d-316c22f70703\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") "
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.225302 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") pod \"0ee875e7-6eab-4220-a29d-316c22f70703\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") "
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.227778 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities" (OuterVolumeSpecName: "utilities") pod "0ee875e7-6eab-4220-a29d-316c22f70703" (UID: "0ee875e7-6eab-4220-a29d-316c22f70703"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.237829 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv" (OuterVolumeSpecName: "kube-api-access-8kmkv") pod "0ee875e7-6eab-4220-a29d-316c22f70703" (UID: "0ee875e7-6eab-4220-a29d-316c22f70703"). InnerVolumeSpecName "kube-api-access-8kmkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.286151 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ee875e7-6eab-4220-a29d-316c22f70703" (UID: "0ee875e7-6eab-4220-a29d-316c22f70703"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.327675 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.327736 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.327753 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.543008 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.543346 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jlgwg" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server" containerID="cri-o://9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234" gracePeriod=2
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.615857 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ee875e7-6eab-4220-a29d-316c22f70703" containerID="6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d" exitCode=0
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.616351 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerDied","Data":"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"}
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.616397 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerDied","Data":"8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413"}
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.616421 4705 scope.go:117] "RemoveContainer" containerID="6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.616516 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.647765 4705 scope.go:117] "RemoveContainer" containerID="a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.649999 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.667534 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.671786 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.672032 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerName="controller-manager" containerID="cri-o://6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c" gracePeriod=30
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.710525 4705 scope.go:117] "RemoveContainer" containerID="d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.710659 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.711151 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerName="route-controller-manager" containerID="cri-o://6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48" gracePeriod=30
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.855688 4705 scope.go:117] "RemoveContainer" containerID="6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"
Feb 16 14:56:52 crc kubenswrapper[4705]: E0216 14:56:52.856171 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d\": container with ID starting with 6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d not found: ID does not exist" containerID="6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.856214 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"} err="failed to get container status \"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d\": rpc error: code = NotFound desc = could not find container \"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d\": container with ID starting with 6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d not found: ID does not exist"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.856237 4705 scope.go:117] "RemoveContainer" containerID="a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"
Feb 16 14:56:52 crc kubenswrapper[4705]: E0216 14:56:52.856927 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507\": container with ID starting with a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507 not found: ID does not exist" containerID="a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.856966 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"} err="failed to get container status \"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507\": rpc error: code = NotFound desc = could not find container \"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507\": container with ID starting with a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507 not found: ID does not exist"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.856995 4705 scope.go:117] "RemoveContainer" containerID="d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"
Feb 16 14:56:52 crc kubenswrapper[4705]: E0216 14:56:52.857555 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826\": container with ID starting with d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826 not found: ID does not exist" containerID="d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.857575 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"} err="failed to get container status \"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826\": rpc error: code = NotFound desc = could not find container \"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826\": container with ID starting with d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826 not found: ID does not exist"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.053536 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.143520 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") pod \"c6d685f5-d57e-434b-93c8-727195de9479\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") "
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.143663 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") pod \"c6d685f5-d57e-434b-93c8-727195de9479\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") "
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.143685 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") pod \"c6d685f5-d57e-434b-93c8-727195de9479\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") "
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.145193 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities" (OuterVolumeSpecName: "utilities") pod "c6d685f5-d57e-434b-93c8-727195de9479" (UID: "c6d685f5-d57e-434b-93c8-727195de9479"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.150314 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr" (OuterVolumeSpecName: "kube-api-access-hjsqr") pod "c6d685f5-d57e-434b-93c8-727195de9479" (UID: "c6d685f5-d57e-434b-93c8-727195de9479"). InnerVolumeSpecName "kube-api-access-hjsqr".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.245284 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.245331 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.252907 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.295616 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6d685f5-d57e-434b-93c8-727195de9479" (UID: "c6d685f5-d57e-434b-93c8-727195de9479"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.302443 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346476 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346524 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346675 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") pod \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346698 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") pod \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346747 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346769 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346794 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.347809 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca" (OuterVolumeSpecName: "client-ca") pod "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" (UID: "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.347869 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") pod \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.347907 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") pod \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.348253 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 
14:56:53.348271 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.348461 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config" (OuterVolumeSpecName: "config") pod "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" (UID: "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349268 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca" (OuterVolumeSpecName: "client-ca") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349335 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config" (OuterVolumeSpecName: "config") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349403 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349420 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349485 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq" (OuterVolumeSpecName: "kube-api-access-jkncq") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "kube-api-access-jkncq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.350049 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" (UID: "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.350830 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f" (OuterVolumeSpecName: "kube-api-access-gg54f") pod "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" (UID: "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a"). InnerVolumeSpecName "kube-api-access-gg54f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450145 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450210 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450233 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450253 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450272 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450290 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450306 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450322 4705 reconciler_common.go:293] 
"Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.629417 4705 generic.go:334] "Generic (PLEG): container finished" podID="c6d685f5-d57e-434b-93c8-727195de9479" containerID="9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234" exitCode=0 Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.629459 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerDied","Data":"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.630657 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerDied","Data":"73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.629522 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.630719 4705 scope.go:117] "RemoveContainer" containerID="9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.633218 4705 generic.go:334] "Generic (PLEG): container finished" podID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerID="6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48" exitCode=0 Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.633279 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" event={"ID":"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a","Type":"ContainerDied","Data":"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.633289 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.633303 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" event={"ID":"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a","Type":"ContainerDied","Data":"c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.636943 4705 generic.go:334] "Generic (PLEG): container finished" podID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerID="6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c" exitCode=0 Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.637043 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" 
event={"ID":"082d4064-6b1c-4a39-9839-3466e7a1ce3a","Type":"ContainerDied","Data":"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.637126 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" event={"ID":"082d4064-6b1c-4a39-9839-3466e7a1ce3a","Type":"ContainerDied","Data":"1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.637043 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.668504 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.686256 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.689581 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.690475 4705 scope.go:117] "RemoveContainer" containerID="22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.691557 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.708833 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.712242 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] 
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.724243 4705 scope.go:117] "RemoveContainer" containerID="fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.751983 4705 scope.go:117] "RemoveContainer" containerID="9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234"
Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.752751 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234\": container with ID starting with 9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234 not found: ID does not exist" containerID="9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.752825 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234"} err="failed to get container status \"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234\": rpc error: code = NotFound desc = could not find container \"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234\": container with ID starting with 9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234 not found: ID does not exist"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.752882 4705 scope.go:117] "RemoveContainer" containerID="22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130"
Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.753504 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130\": container with ID starting with 22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130 not found: ID does not exist" containerID="22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.753582 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130"} err="failed to get container status \"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130\": rpc error: code = NotFound desc = could not find container \"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130\": container with ID starting with 22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130 not found: ID does not exist"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.753631 4705 scope.go:117] "RemoveContainer" containerID="fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4"
Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.754218 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4\": container with ID starting with fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4 not found: ID does not exist" containerID="fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.754259 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4"} err="failed to get container status \"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4\": rpc error: code = NotFound desc = could not find container \"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4\": container with ID starting with fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4 not found: ID does not exist"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.754288 4705 scope.go:117] "RemoveContainer" containerID="6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.774178 4705 scope.go:117] "RemoveContainer" containerID="6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48"
Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.774949 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48\": container with ID starting with 6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48 not found: ID does not exist" containerID="6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.775021 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48"} err="failed to get container status \"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48\": rpc error: code = NotFound desc = could not find container \"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48\": container with ID starting with 6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48 not found: ID does not exist"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.775070 4705 scope.go:117] "RemoveContainer" containerID="6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.791012 4705 scope.go:117] "RemoveContainer" containerID="6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c"
Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.791632 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c\": container with ID starting with 6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c not found: ID does not exist" containerID="6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.791682 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c"} err="failed to get container status \"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c\": rpc error: code = NotFound desc = could not find container \"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c\": container with ID starting with 6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c not found: ID does not exist"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.302482 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"]
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.302889 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerName="route-controller-manager"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.302922 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerName="route-controller-manager"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.302950 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.302964 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.302980 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.302992 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303018 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="extract-utilities"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303036 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="extract-utilities"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303054 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="extract-content"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303067 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="extract-content"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303081 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerName="controller-manager"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303094 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerName="controller-manager"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303117 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="extract-content"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303130 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="extract-content"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303146 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="extract-utilities"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303160 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="extract-utilities"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303185 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="extract-utilities"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303199 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="extract-utilities"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303215 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="extract-content"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303228 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="extract-content"
Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303244 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="registry-server"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303257 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="registry-server"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303471 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303504 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303525 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerName="route-controller-manager"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303547 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="registry-server"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303564 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerName="controller-manager"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.304253 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.308579 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"]
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.309065 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.310161 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.310230 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.310712 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.311006 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.311835 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.311982 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.317944 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.318481 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.318518 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.320872 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.322568 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.323179 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.327781 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"]
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.329044 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.350547 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"]
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.367712 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.367799 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64c87\" (UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz"
Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368336 4705 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368404 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368510 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368609 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368701 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") pod 
\"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368767 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.429328 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" path="/var/lib/kubelet/pods/082d4064-6b1c-4a39-9839-3466e7a1ce3a/volumes" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.430587 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" path="/var/lib/kubelet/pods/0ee875e7-6eab-4220-a29d-316c22f70703/volumes" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.431575 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" path="/var/lib/kubelet/pods/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a/volumes" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.433210 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6d685f5-d57e-434b-93c8-727195de9479" path="/var/lib/kubelet/pods/c6d685f5-d57e-434b-93c8-727195de9479/volumes" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.470644 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " 
pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.470720 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.470777 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.470856 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.472805 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.472681 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.472913 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64c87\" (UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.472975 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.473604 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.473677 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 
crc kubenswrapper[4705]: I0216 14:56:54.473892 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.474241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.475276 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.475277 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.479272 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " 
pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.479421 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.496632 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64c87\" (UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.501851 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.649918 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.665383 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.141315 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"] Feb 16 14:56:55 crc kubenswrapper[4705]: W0216 14:56:55.151594 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47c8a460_a52e_4669_bce1_28110d7d1d84.slice/crio-2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796 WatchSource:0}: Error finding container 2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796: Status 404 returned error can't find the container with id 2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796 Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.165131 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"] Feb 16 14:56:55 crc kubenswrapper[4705]: W0216 14:56:55.183594 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd37fdcf5_d38d_4ee6_a395_67c634cc101d.slice/crio-11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed WatchSource:0}: Error finding container 11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed: Status 404 returned error can't find the container with id 11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.692561 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" event={"ID":"d37fdcf5-d38d-4ee6-a395-67c634cc101d","Type":"ContainerStarted","Data":"d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1"} Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.693098 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" event={"ID":"d37fdcf5-d38d-4ee6-a395-67c634cc101d","Type":"ContainerStarted","Data":"11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed"} Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.693124 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.701730 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" event={"ID":"47c8a460-a52e-4669-bce1-28110d7d1d84","Type":"ContainerStarted","Data":"4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b"} Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.701797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" event={"ID":"47c8a460-a52e-4669-bce1-28110d7d1d84","Type":"ContainerStarted","Data":"2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796"} Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.704820 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.714232 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" podStartSLOduration=3.714199205 podStartE2EDuration="3.714199205s" podCreationTimestamp="2026-02-16 14:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:55.712199618 +0000 UTC m=+209.897176694" watchObservedRunningTime="2026-02-16 14:56:55.714199205 +0000 UTC m=+209.899176291" Feb 16 14:56:55 crc 
kubenswrapper[4705]: I0216 14:56:55.902237 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.925853 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" podStartSLOduration=3.925821789 podStartE2EDuration="3.925821789s" podCreationTimestamp="2026-02-16 14:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:55.761722649 +0000 UTC m=+209.946699725" watchObservedRunningTime="2026-02-16 14:56:55.925821789 +0000 UTC m=+210.110798875" Feb 16 14:56:56 crc kubenswrapper[4705]: I0216 14:56:56.721982 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:56 crc kubenswrapper[4705]: I0216 14:56:56.731670 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:58 crc kubenswrapper[4705]: I0216 14:56:58.951478 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:56:58 crc kubenswrapper[4705]: I0216 14:56:58.954559 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ngfnt" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="registry-server" containerID="cri-o://d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" gracePeriod=2 Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.528179 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.662071 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") pod \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.662142 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") pod \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.662229 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") pod \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.663089 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities" (OuterVolumeSpecName: "utilities") pod "1f1a76ff-82ae-4dac-88d2-20e6858835e3" (UID: "1f1a76ff-82ae-4dac-88d2-20e6858835e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.668490 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9" (OuterVolumeSpecName: "kube-api-access-hr5j9") pod "1f1a76ff-82ae-4dac-88d2-20e6858835e3" (UID: "1f1a76ff-82ae-4dac-88d2-20e6858835e3"). InnerVolumeSpecName "kube-api-access-hr5j9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.707436 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f1a76ff-82ae-4dac-88d2-20e6858835e3" (UID: "1f1a76ff-82ae-4dac-88d2-20e6858835e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746431 4705 generic.go:334] "Generic (PLEG): container finished" podID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerID="d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" exitCode=0 Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746489 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerDied","Data":"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8"} Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerDied","Data":"8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938"} Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746564 4705 scope.go:117] "RemoveContainer" containerID="d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746711 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.763507 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.763538 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.763548 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.778603 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.783044 4705 scope.go:117] "RemoveContainer" containerID="e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.786156 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.798778 4705 scope.go:117] "RemoveContainer" containerID="79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.812902 4705 scope.go:117] "RemoveContainer" containerID="d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" Feb 16 14:56:59 crc kubenswrapper[4705]: E0216 14:56:59.813600 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8\": container with ID starting with d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8 not found: ID does not exist" containerID="d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.813659 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8"} err="failed to get container status \"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8\": rpc error: code = NotFound desc = could not find container \"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8\": container with ID starting with d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8 not found: ID does not exist" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.813701 4705 scope.go:117] "RemoveContainer" containerID="e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5" Feb 16 14:56:59 crc kubenswrapper[4705]: E0216 14:56:59.814111 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5\": container with ID starting with e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5 not found: ID does not exist" containerID="e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.814177 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5"} err="failed to get container status \"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5\": rpc error: code = NotFound desc = could not find container \"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5\": container with ID 
starting with e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5 not found: ID does not exist" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.814221 4705 scope.go:117] "RemoveContainer" containerID="79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802" Feb 16 14:56:59 crc kubenswrapper[4705]: E0216 14:56:59.814628 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802\": container with ID starting with 79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802 not found: ID does not exist" containerID="79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.814658 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802"} err="failed to get container status \"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802\": rpc error: code = NotFound desc = could not find container \"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802\": container with ID starting with 79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802 not found: ID does not exist" Feb 16 14:57:00 crc kubenswrapper[4705]: I0216 14:57:00.425790 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" path="/var/lib/kubelet/pods/1f1a76ff-82ae-4dac-88d2-20e6858835e3/volumes" Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.556977 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" containerID="cri-o://1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d" gracePeriod=15 Feb 16 14:57:01 crc 
kubenswrapper[4705]: I0216 14:57:01.688536 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.688618 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.688693 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.689504 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.689580 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a" gracePeriod=600 Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.762715 4705 generic.go:334] "Generic (PLEG): container finished" podID="100a207c-bfcf-42aa-8233-f760df5a3888" 
containerID="1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d" exitCode=0 Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.762764 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" event={"ID":"100a207c-bfcf-42aa-8233-f760df5a3888","Type":"ContainerDied","Data":"1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d"} Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.034828 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093502 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093599 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093637 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093659 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093689 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093716 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093748 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093784 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094076 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094115 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094157 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094194 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094222 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") pod 
\"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094724 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094721 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094776 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094854 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.095069 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096251 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096268 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096278 4705 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096287 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096298 4705 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 
16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.101453 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg" (OuterVolumeSpecName: "kube-api-access-r92bg") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "kube-api-access-r92bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.101502 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.101588 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.102291 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.102895 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.102972 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.103309 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.105591 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.109014 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197088 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197137 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197151 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197162 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197173 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197184 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197198 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197212 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197222 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.770423 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" event={"ID":"100a207c-bfcf-42aa-8233-f760df5a3888","Type":"ContainerDied","Data":"fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7"} Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.770493 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.770643 4705 scope.go:117] "RemoveContainer" containerID="1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.772569 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a" exitCode=0 Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.772627 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a"} Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.772655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308"} Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.802290 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.807529 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:57:04 crc kubenswrapper[4705]: I0216 14:57:04.448897 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" path="/var/lib/kubelet/pods/100a207c-bfcf-42aa-8233-f760df5a3888/volumes" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.311464 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"] Feb 16 
14:57:11 crc kubenswrapper[4705]: E0216 14:57:11.312099 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="extract-content" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312115 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="extract-content" Feb 16 14:57:11 crc kubenswrapper[4705]: E0216 14:57:11.312140 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="registry-server" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312147 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="registry-server" Feb 16 14:57:11 crc kubenswrapper[4705]: E0216 14:57:11.312166 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="extract-utilities" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312175 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="extract-utilities" Feb 16 14:57:11 crc kubenswrapper[4705]: E0216 14:57:11.312185 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312195 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312318 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="registry-server" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312338 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" Feb 16 
14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312867 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317150 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317280 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317300 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317335 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317306 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317390 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.318689 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319097 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319161 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319428 4705 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319610 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319430 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.331681 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"] Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.332650 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.332992 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.342120 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454745 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454808 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-error\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454838 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-session\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454870 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454897 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-dir\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: 
\"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455113 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455137 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455175 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc 
kubenswrapper[4705]: I0216 14:57:11.455233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-policies\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455284 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzzr4\" (UniqueName: \"kubernetes.io/projected/7f077276-54eb-47be-a85c-46b0942e1bb6-kube-api-access-mzzr4\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455310 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-login\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.556944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-login\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.557008 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.557030 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-error\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.557049 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-session\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.557068 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562452 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562571 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562660 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-dir\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562857 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562916 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562977 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563009 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563122 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563194 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-policies\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzzr4\" (UniqueName: \"kubernetes.io/projected/7f077276-54eb-47be-a85c-46b0942e1bb6-kube-api-access-mzzr4\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563978 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-dir\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.565635 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.566388 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.568645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-policies\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.570602 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.572797 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.578734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.582265 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.582289 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.582587 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-session\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.582769 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-error\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.583688 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-login\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.585921 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzzr4\" (UniqueName: \"kubernetes.io/projected/7f077276-54eb-47be-a85c-46b0942e1bb6-kube-api-access-mzzr4\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.627927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.028448 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"]
Feb 16 14:57:12 crc kubenswrapper[4705]: W0216 14:57:12.034265 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f077276_54eb_47be_a85c_46b0942e1bb6.slice/crio-dd8fa0fbc1d660206c9f656faff29a1a071b95cf3acb19db04864828f3ad3915 WatchSource:0}: Error finding container dd8fa0fbc1d660206c9f656faff29a1a071b95cf3acb19db04864828f3ad3915: Status 404 returned error can't find the container with id dd8fa0fbc1d660206c9f656faff29a1a071b95cf3acb19db04864828f3ad3915
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.610391 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"]
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.611144 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerName="controller-manager" containerID="cri-o://d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1" gracePeriod=30
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.703659 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"]
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.704170 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerName="route-controller-manager" containerID="cri-o://4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b" gracePeriod=30
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.840919 4705 generic.go:334] "Generic (PLEG): container finished" podID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerID="d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1" exitCode=0
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.840995 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" event={"ID":"d37fdcf5-d38d-4ee6-a395-67c634cc101d","Type":"ContainerDied","Data":"d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1"}
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.842768 4705 generic.go:334] "Generic (PLEG): container finished" podID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerID="4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b" exitCode=0
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.842860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" event={"ID":"47c8a460-a52e-4669-bce1-28110d7d1d84","Type":"ContainerDied","Data":"4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b"}
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.844584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" event={"ID":"7f077276-54eb-47be-a85c-46b0942e1bb6","Type":"ContainerStarted","Data":"7907afb5950b10f1cf524c738d4d96e0cb00c8e64bc7e97049284e9d20c7ccea"}
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.844615 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" event={"ID":"7f077276-54eb-47be-a85c-46b0942e1bb6","Type":"ContainerStarted","Data":"dd8fa0fbc1d660206c9f656faff29a1a071b95cf3acb19db04864828f3ad3915"}
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.845066 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.857071 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"
Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.898516 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" podStartSLOduration=36.898499644 podStartE2EDuration="36.898499644s" podCreationTimestamp="2026-02-16 14:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:12.873799545 +0000 UTC m=+227.058776611" watchObservedRunningTime="2026-02-16 14:57:12.898499644 +0000 UTC m=+227.083476710"
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.204435 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.209638 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz"
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284724 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") pod \"47c8a460-a52e-4669-bce1-28110d7d1d84\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") "
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284773 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") "
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284801 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") "
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284834 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") "
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284851 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") pod \"47c8a460-a52e-4669-bce1-28110d7d1d84\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") "
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284871 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64c87\" (UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") "
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284908 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") pod \"47c8a460-a52e-4669-bce1-28110d7d1d84\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") "
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284939 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") pod \"47c8a460-a52e-4669-bce1-28110d7d1d84\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") "
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") "
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286210 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca" (OuterVolumeSpecName: "client-ca") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286477 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config" (OuterVolumeSpecName: "config") pod "47c8a460-a52e-4669-bce1-28110d7d1d84" (UID: "47c8a460-a52e-4669-bce1-28110d7d1d84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286599 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286503 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config" (OuterVolumeSpecName: "config") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286529 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca" (OuterVolumeSpecName: "client-ca") pod "47c8a460-a52e-4669-bce1-28110d7d1d84" (UID: "47c8a460-a52e-4669-bce1-28110d7d1d84"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.291277 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87" (OuterVolumeSpecName: "kube-api-access-64c87") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "kube-api-access-64c87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.291932 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt" (OuterVolumeSpecName: "kube-api-access-fz7vt") pod "47c8a460-a52e-4669-bce1-28110d7d1d84" (UID: "47c8a460-a52e-4669-bce1-28110d7d1d84"). InnerVolumeSpecName "kube-api-access-fz7vt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.292651 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47c8a460-a52e-4669-bce1-28110d7d1d84" (UID: "47c8a460-a52e-4669-bce1-28110d7d1d84"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.293469 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.385994 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386034 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386044 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386053 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386063 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64c87\" (UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386076 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386084 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386094 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386102 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.851724 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" event={"ID":"47c8a460-a52e-4669-bce1-28110d7d1d84","Type":"ContainerDied","Data":"2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796"}
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.851777 4705 scope.go:117] "RemoveContainer" containerID="4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b"
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.851779 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.854678 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" event={"ID":"d37fdcf5-d38d-4ee6-a395-67c634cc101d","Type":"ContainerDied","Data":"11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed"}
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.854823 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz"
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.877223 4705 scope.go:117] "RemoveContainer" containerID="d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1"
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.886950 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"]
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.894229 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"]
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.917666 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"]
Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.921216 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"]
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317063 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"]
Feb 16 14:57:14 crc kubenswrapper[4705]: E0216 14:57:14.317441 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerName="route-controller-manager"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317457 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerName="route-controller-manager"
Feb 16 14:57:14 crc kubenswrapper[4705]: E0216 14:57:14.317469 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerName="controller-manager"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317479 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerName="controller-manager"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317585 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerName="route-controller-manager"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317602 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerName="controller-manager"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.318138 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.320562 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.320708 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.320837 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.321006 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.321095 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.322875 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.323166 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"]
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.324652 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.335848 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.350384 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.351206 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.351454 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.352851 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.354496 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.359968 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"]
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.361392 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.366149 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"]
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.399881 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqj5\" (UniqueName: \"kubernetes.io/projected/10d74ea2-e93d-4c5b-b659-61bce2500a4d-kube-api-access-2xqj5\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400165 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-config\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400303 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmd85\" (UniqueName: \"kubernetes.io/projected/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-kube-api-access-wmd85\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400465 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-serving-cert\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400590 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-client-ca\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400715 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-proxy-ca-bundles\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400927 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10d74ea2-e93d-4c5b-b659-61bce2500a4d-serving-cert\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.401038 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-config\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"
Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.401101 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-client-ca\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: 
\"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.430620 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" path="/var/lib/kubelet/pods/47c8a460-a52e-4669-bce1-28110d7d1d84/volumes" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.431729 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" path="/var/lib/kubelet/pods/d37fdcf5-d38d-4ee6-a395-67c634cc101d/volumes" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xqj5\" (UniqueName: \"kubernetes.io/projected/10d74ea2-e93d-4c5b-b659-61bce2500a4d-kube-api-access-2xqj5\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503291 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-config\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmd85\" (UniqueName: \"kubernetes.io/projected/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-kube-api-access-wmd85\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 
14:57:14.503440 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-serving-cert\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503498 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-client-ca\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503571 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-proxy-ca-bundles\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503663 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10d74ea2-e93d-4c5b-b659-61bce2500a4d-serving-cert\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503758 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-config\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " 
pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503857 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-client-ca\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.504394 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-config\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.505083 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-client-ca\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.506230 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-client-ca\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.506581 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-proxy-ca-bundles\") pod 
\"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.507216 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-config\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.512063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-serving-cert\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.518922 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xqj5\" (UniqueName: \"kubernetes.io/projected/10d74ea2-e93d-4c5b-b659-61bce2500a4d-kube-api-access-2xqj5\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.521870 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10d74ea2-e93d-4c5b-b659-61bce2500a4d-serving-cert\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.527830 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmd85\" (UniqueName: 
\"kubernetes.io/projected/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-kube-api-access-wmd85\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.649751 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.650927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.872459 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"] Feb 16 14:57:14 crc kubenswrapper[4705]: W0216 14:57:14.881723 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ceaa67_4f36_4622_88ab_c2d5413c57f6.slice/crio-18952a77e126116b9a62c893eba03af44ee1224bc5a66002e27322486cc69b24 WatchSource:0}: Error finding container 18952a77e126116b9a62c893eba03af44ee1224bc5a66002e27322486cc69b24: Status 404 returned error can't find the container with id 18952a77e126116b9a62c893eba03af44ee1224bc5a66002e27322486cc69b24 Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.932170 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"] Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.871714 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" event={"ID":"10d74ea2-e93d-4c5b-b659-61bce2500a4d","Type":"ContainerStarted","Data":"f7756b3b41b2751a91ed206e5bfc85f605958d1fa290c9e840cd6d51cfa383d1"} Feb 16 14:57:15 crc 
kubenswrapper[4705]: I0216 14:57:15.872346 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" event={"ID":"10d74ea2-e93d-4c5b-b659-61bce2500a4d","Type":"ContainerStarted","Data":"f10b2d2413920cfffbdff891cf9134716df1497e964e0d119be7b52d0fe2a774"} Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.872468 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.874100 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" event={"ID":"f2ceaa67-4f36-4622-88ab-c2d5413c57f6","Type":"ContainerStarted","Data":"40b2b93e397cda8a5945d848024a56a0203fe4fb31672e3655442a0fe5eba83b"} Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.874161 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" event={"ID":"f2ceaa67-4f36-4622-88ab-c2d5413c57f6","Type":"ContainerStarted","Data":"18952a77e126116b9a62c893eba03af44ee1224bc5a66002e27322486cc69b24"} Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.874378 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.878746 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.880835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.896919 4705 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" podStartSLOduration=3.896903311 podStartE2EDuration="3.896903311s" podCreationTimestamp="2026-02-16 14:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:15.893241648 +0000 UTC m=+230.078218734" watchObservedRunningTime="2026-02-16 14:57:15.896903311 +0000 UTC m=+230.081880397" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.919676 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" podStartSLOduration=3.919655535 podStartE2EDuration="3.919655535s" podCreationTimestamp="2026-02-16 14:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:15.917080472 +0000 UTC m=+230.102057558" watchObservedRunningTime="2026-02-16 14:57:15.919655535 +0000 UTC m=+230.104632611" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.175249 4705 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177191 4705 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177234 4705 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177335 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177737 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" gracePeriod=15 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177800 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" gracePeriod=15 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177863 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" gracePeriod=15 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177906 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" gracePeriod=15 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177939 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" gracePeriod=15 Feb 16 14:57:22 crc 
kubenswrapper[4705]: I0216 14:57:22.180904 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.182969 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183049 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183073 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183087 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183115 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183140 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183158 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183173 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183200 4705 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183213 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183230 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183243 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183261 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183273 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183552 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183579 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183604 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183620 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc 
kubenswrapper[4705]: I0216 14:57:22.183645 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183666 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.319978 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320428 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320456 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320486 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320514 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320759 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320913 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422310 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422339 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422384 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422399 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422419 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422421 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422460 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422468 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422485 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc 
kubenswrapper[4705]: I0216 14:57:22.422502 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422444 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422396 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422529 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422546 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.929734 4705 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.931232 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932111 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" exitCode=0 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932138 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" exitCode=0 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932149 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" exitCode=0 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932158 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" exitCode=2 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932229 4705 scope.go:117] "RemoveContainer" containerID="50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.934350 4705 generic.go:334] "Generic (PLEG): container finished" podID="6b45f345-45b8-4e21-a4da-46e4d43e429e" containerID="8a30aa7cf7e0f680c219c737827d7511374124ec3b0f2c971c1e7c9989007cdc" exitCode=0 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.934395 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"6b45f345-45b8-4e21-a4da-46e4d43e429e","Type":"ContainerDied","Data":"8a30aa7cf7e0f680c219c737827d7511374124ec3b0f2c971c1e7c9989007cdc"} Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.935287 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:23 crc kubenswrapper[4705]: I0216 14:57:23.942251 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.452766 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.454525 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550208 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") pod \"6b45f345-45b8-4e21-a4da-46e4d43e429e\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550307 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") pod \"6b45f345-45b8-4e21-a4da-46e4d43e429e\" 
(UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550324 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") pod \"6b45f345-45b8-4e21-a4da-46e4d43e429e\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550641 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock" (OuterVolumeSpecName: "var-lock") pod "6b45f345-45b8-4e21-a4da-46e4d43e429e" (UID: "6b45f345-45b8-4e21-a4da-46e4d43e429e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550908 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6b45f345-45b8-4e21-a4da-46e4d43e429e" (UID: "6b45f345-45b8-4e21-a4da-46e4d43e429e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.557971 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6b45f345-45b8-4e21-a4da-46e4d43e429e" (UID: "6b45f345-45b8-4e21-a4da-46e4d43e429e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.651330 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.651723 4705 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.651732 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.655870 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.656748 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.657195 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.657453 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752209 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752273 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752300 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752351 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752443 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752583 4705 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752597 4705 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752607 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.854143 4705 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.974217 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.978828 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" exitCode=0 Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.978968 4705 scope.go:117] "RemoveContainer" containerID="e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.979276 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.982671 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6b45f345-45b8-4e21-a4da-46e4d43e429e","Type":"ContainerDied","Data":"87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014"} Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.982731 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.982737 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.002351 4705 scope.go:117] "RemoveContainer" containerID="d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.015545 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.016109 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.019811 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.020289 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection 
refused" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.025096 4705 scope.go:117] "RemoveContainer" containerID="c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.040511 4705 scope.go:117] "RemoveContainer" containerID="7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.056608 4705 scope.go:117] "RemoveContainer" containerID="56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.072337 4705 scope.go:117] "RemoveContainer" containerID="f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.095694 4705 scope.go:117] "RemoveContainer" containerID="e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.096469 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\": container with ID starting with e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852 not found: ID does not exist" containerID="e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.096574 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852"} err="failed to get container status \"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\": rpc error: code = NotFound desc = could not find container \"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\": container with ID starting with e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852 not found: ID does not exist" Feb 16 14:57:25 crc 
kubenswrapper[4705]: I0216 14:57:25.096678 4705 scope.go:117] "RemoveContainer" containerID="d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.097101 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\": container with ID starting with d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6 not found: ID does not exist" containerID="d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.097155 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6"} err="failed to get container status \"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\": rpc error: code = NotFound desc = could not find container \"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\": container with ID starting with d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6 not found: ID does not exist" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.097195 4705 scope.go:117] "RemoveContainer" containerID="c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.097701 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\": container with ID starting with c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373 not found: ID does not exist" containerID="c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.097794 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373"} err="failed to get container status \"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\": rpc error: code = NotFound desc = could not find container \"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\": container with ID starting with c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373 not found: ID does not exist" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.097866 4705 scope.go:117] "RemoveContainer" containerID="7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.098307 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\": container with ID starting with 7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9 not found: ID does not exist" containerID="7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.098392 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9"} err="failed to get container status \"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\": rpc error: code = NotFound desc = could not find container \"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\": container with ID starting with 7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9 not found: ID does not exist" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.098418 4705 scope.go:117] "RemoveContainer" containerID="56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.098909 4705 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\": container with ID starting with 56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1 not found: ID does not exist" containerID="56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.099039 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1"} err="failed to get container status \"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\": rpc error: code = NotFound desc = could not find container \"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\": container with ID starting with 56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1 not found: ID does not exist" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.099173 4705 scope.go:117] "RemoveContainer" containerID="f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.099598 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\": container with ID starting with f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9 not found: ID does not exist" containerID="f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.099635 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9"} err="failed to get container status \"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\": rpc error: code = NotFound desc = could not find container 
\"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\": container with ID starting with f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9 not found: ID does not exist" Feb 16 14:57:26 crc kubenswrapper[4705]: I0216 14:57:26.424419 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: I0216 14:57:26.425620 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: I0216 14:57:26.432570 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.718215 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.719251 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.719812 4705 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.720546 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.720895 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: I0216 14:57:26.720937 4705 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.721317 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="200ms" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.922238 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="400ms" Feb 16 14:57:27 crc kubenswrapper[4705]: E0216 14:57:27.244735 4705 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:27 crc kubenswrapper[4705]: 
I0216 14:57:27.245126 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:27 crc kubenswrapper[4705]: W0216 14:57:27.263883 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-a428ed8eacbcdca81937f2327f58b1b538d4838303fe842b39080454eb5ab8e5 WatchSource:0}: Error finding container a428ed8eacbcdca81937f2327f58b1b538d4838303fe842b39080454eb5ab8e5: Status 404 returned error can't find the container with id a428ed8eacbcdca81937f2327f58b1b538d4838303fe842b39080454eb5ab8e5 Feb 16 14:57:27 crc kubenswrapper[4705]: E0216 14:57:27.267453 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c1fd55651078 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:57:27.26702092 +0000 UTC m=+241.451997996,LastTimestamp:2026-02-16 14:57:27.26702092 +0000 UTC m=+241.451997996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:57:27 crc kubenswrapper[4705]: E0216 14:57:27.326926 4705 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="800ms" Feb 16 14:57:28 crc kubenswrapper[4705]: I0216 14:57:28.001582 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"} Feb 16 14:57:28 crc kubenswrapper[4705]: I0216 14:57:28.002132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a428ed8eacbcdca81937f2327f58b1b538d4838303fe842b39080454eb5ab8e5"} Feb 16 14:57:28 crc kubenswrapper[4705]: E0216 14:57:28.002897 4705 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:28 crc kubenswrapper[4705]: I0216 14:57:28.002989 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:28 crc kubenswrapper[4705]: E0216 14:57:28.129285 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="1.6s" Feb 16 14:57:29 crc kubenswrapper[4705]: 
E0216 14:57:29.731249 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="3.2s" Feb 16 14:57:30 crc kubenswrapper[4705]: E0216 14:57:30.032916 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c1fd55651078 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:57:27.26702092 +0000 UTC m=+241.451997996,LastTimestamp:2026-02-16 14:57:27.26702092 +0000 UTC m=+241.451997996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.418899 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.419498 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.440009 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.440041 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:32 crc kubenswrapper[4705]: E0216 14:57:32.440311 4705 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.440820 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:32 crc kubenswrapper[4705]: E0216 14:57:32.932881 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="6.4s" Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036269 4705 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7e932b61fe25a189a4870f79d3277397e7e7646a88406dff42273f87ffe56204" exitCode=0 Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036323 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7e932b61fe25a189a4870f79d3277397e7e7646a88406dff42273f87ffe56204"} Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036400 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5392b6374769c9d76a1471cc24a055ec975e57d9ba591533996058c3caa92bee"} Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036749 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036779 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.037008 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:33 crc kubenswrapper[4705]: E0216 14:57:33.037149 4705 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:34 crc kubenswrapper[4705]: I0216 14:57:34.054627 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b01b038538dec6d83bad14b3cefe007b6a9bcd90e4678d675c93fd0baaa9744"} Feb 16 14:57:34 crc kubenswrapper[4705]: I0216 14:57:34.056071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"272a67bc66194163d69fd3ad217ce215708b666adffad8ef4256d1a1abd0d19c"} Feb 16 14:57:34 crc kubenswrapper[4705]: I0216 14:57:34.056184 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f666f38a0b77bb446295d2fd9b5790757f35f127ca82ceac71ac5f41f356bdde"} Feb 16 14:57:34 crc kubenswrapper[4705]: I0216 14:57:34.056283 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b846ac25d2bd21e73f2aca71805cff5f016db071b9bb6b7e3bbfa624db1f5bf"} Feb 16 14:57:35 crc kubenswrapper[4705]: I0216 14:57:35.070805 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b80e563543ae5ed80b7a022a4d8081deb175876726a96243e8a0084ff1f2074"} Feb 16 14:57:35 crc kubenswrapper[4705]: I0216 14:57:35.071774 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:35 crc kubenswrapper[4705]: I0216 14:57:35.072028 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:35 crc kubenswrapper[4705]: I0216 14:57:35.072158 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.088527 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.089726 4705 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9" exitCode=1 Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.089804 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9"} Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.090986 4705 scope.go:117] "RemoveContainer" containerID="a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.441161 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 
14:57:37.441238 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.449318 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.583597 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:38 crc kubenswrapper[4705]: I0216 14:57:38.136688 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 14:57:38 crc kubenswrapper[4705]: I0216 14:57:38.137261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eb9d3cb6732a77878233522dacb3ee3c5d14e1c4ab14cc9b0d5f49c55a000db0"} Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.086737 4705 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.138693 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8b7096f-55d2-4296-a1cd-f39b33fcc539" Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.150283 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.150351 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 
16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.161444 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.171919 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8b7096f-55d2-4296-a1cd-f39b33fcc539" Feb 16 14:57:41 crc kubenswrapper[4705]: I0216 14:57:41.155011 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:41 crc kubenswrapper[4705]: I0216 14:57:41.155068 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:41 crc kubenswrapper[4705]: I0216 14:57:41.159618 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8b7096f-55d2-4296-a1cd-f39b33fcc539" Feb 16 14:57:46 crc kubenswrapper[4705]: I0216 14:57:46.244055 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:46 crc kubenswrapper[4705]: I0216 14:57:46.249010 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:46 crc kubenswrapper[4705]: I0216 14:57:46.773554 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 14:57:46 crc kubenswrapper[4705]: I0216 14:57:46.775614 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 14:57:46 crc 
kubenswrapper[4705]: I0216 14:57:46.931449 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.019624 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.023670 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.151402 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.191706 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.193072 4705 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.197695 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.200903 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.201075 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.201506 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.201536 4705 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.207097 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.248785 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.250999 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.251157 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=7.251119761 podStartE2EDuration="7.251119761s" podCreationTimestamp="2026-02-16 14:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:47.249183746 +0000 UTC m=+261.434160852" watchObservedRunningTime="2026-02-16 14:57:47.251119761 +0000 UTC m=+261.436096877" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.292941 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.293350 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.463961 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.922604 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 14:57:47 crc 
kubenswrapper[4705]: I0216 14:57:47.959880 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.985568 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.993328 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.047066 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.098576 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.235098 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.343486 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.413939 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.415843 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.435969 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.455364 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.523835 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.613554 4705 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.641207 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.665884 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.725179 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.725817 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.819211 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.827903 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.130822 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.170238 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.359575 
4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.402419 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.420831 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.431708 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.510804 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.654500 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.686800 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.971458 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.162475 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.195921 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.237222 4705 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 14:57:50 crc 
kubenswrapper[4705]: I0216 14:57:50.237526 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c" gracePeriod=5 Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.302911 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.438303 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.449821 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.450839 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.598519 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.638511 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.712149 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.732302 4705 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.780414 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.919780 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.955002 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.003301 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.019002 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.049874 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.178566 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.202132 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.277059 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.327197 4705 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.371961 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.408653 4705 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.443606 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.676153 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.791960 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.846548 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.862975 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.875036 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:57:52 crc kubenswrapper[4705]: I0216 14:57:52.065656 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 14:57:52 crc kubenswrapper[4705]: I0216 14:57:52.500761 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 14:57:52 crc kubenswrapper[4705]: I0216 14:57:52.532875 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 14:57:52 crc kubenswrapper[4705]: I0216 14:57:52.739761 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 14:57:52 crc kubenswrapper[4705]: 
I0216 14:57:52.785053 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.085445 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.418194 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.738559 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.933883 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.949062 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.023804 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.220058 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.425323 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.425336 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.440240 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.587641 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.743676 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.170642 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.215053 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.316573 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.533640 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.637307 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.829991 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.830502 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.862800 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.862897 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.862924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.862950 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863004 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863046 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863162 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863185 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863336 4705 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863350 4705 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863359 4705 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863384 4705 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.874119 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.964585 4705 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.039539 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.093622 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.251951 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.252062 4705 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c" exitCode=137
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.252157 4705 scope.go:117] "RemoveContainer" containerID="75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.252436 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.275268 4705 scope.go:117] "RemoveContainer" containerID="75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"
Feb 16 14:57:56 crc kubenswrapper[4705]: E0216 14:57:56.275761 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c\": container with ID starting with 75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c not found: ID does not exist" containerID="75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.275798 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"} err="failed to get container status \"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c\": rpc error: code = NotFound desc = could not find container \"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c\": container with ID starting with 75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c not found: ID does not exist"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.370399 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.427196 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.435261 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.447049 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.505030 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.568871 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.600534 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.625890 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.634557 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.757755 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.935636 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.182631 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.253757 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.279109 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.280990 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.290530 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.336656 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.375645 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.488923 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.584510 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.631852 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.719020 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.796837 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.807428 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.819148 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.905683 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.917222 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.960711 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.158793 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.180286 4705 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.213799 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.320198 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.340518 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.407464 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.520890 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.594351 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.596109 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.614604 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.659254 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.684758 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.715630 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.852438 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.010237 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.111089 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.207662 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.333980 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.453714 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.513754 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.520977 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.664931 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.675567 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.716839 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.791959 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.817889 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.840664 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.946947 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.991698 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.002636 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.015976 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.100659 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.101077 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.136786 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.213142 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.318551 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.371813 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.419942 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.489763 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.563775 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.563872 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.613973 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.620276 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.661238 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.747472 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.748617 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.870426 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.882280 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.916401 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.055155 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.058872 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.066832 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.157912 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.194897 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.459451 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.550034 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.569477 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.602703 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.621138 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.674137 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.784787 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.800000 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.804244 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.869517 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.954736 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.041925 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.046839 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.348953 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.390679 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.422099 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.459538 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.494351 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.522106 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.552343 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.629354 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.675744 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.835710 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.915576 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.047335 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.119204 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.240895 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.265947 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.312983 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.372590 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.421159 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.582303 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.678552 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.741641 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.774822 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.989927 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.105015 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.250680 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.395295 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.550482 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.662318 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.682980 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.711492 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.833905 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.989821 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.030773 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.242421 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.562974 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.610352 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.665977 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.873772 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.985908 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.086026 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.095400 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.395160 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.455689 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.518690 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.520095 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.536577 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.584107 4705 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.608163 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.695314 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.779895 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.811727 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.923725 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.062429 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.148980 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.211735 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.435942 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.534852 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.789334 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.843276 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.844719 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.881456 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 14:58:08 crc kubenswrapper[4705]: I0216 14:58:08.554736 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 14:58:09 crc kubenswrapper[4705]: I0216 14:58:09.203816 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 16 14:58:10 crc kubenswrapper[4705]: I0216 14:58:10.006447 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.776187 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"]
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.777087 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sj9bt" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="registry-server" containerID="cri-o://d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c" gracePeriod=30
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.781077 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"]
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.781287 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wvxpr" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="registry-server" containerID="cri-o://d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b" gracePeriod=30
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.787435 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"]
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.787637 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator" containerID="cri-o://004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" gracePeriod=30
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.811278 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"]
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.811569 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gmh5s" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="registry-server" containerID="cri-o://3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870" gracePeriod=30
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.827947 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"]
Feb 16 14:58:11 crc
kubenswrapper[4705]: I0216 14:58:11.828498 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qkkgp" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server" containerID="cri-o://44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff" gracePeriod=30 Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.850911 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ghmpd"] Feb 16 14:58:11 crc kubenswrapper[4705]: E0216 14:58:11.851490 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851504 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 14:58:11 crc kubenswrapper[4705]: E0216 14:58:11.851518 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" containerName="installer" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851525 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" containerName="installer" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851617 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851628 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" containerName="installer" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851988 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.870619 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ghmpd"] Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.918725 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88197577-5157-4d99-9813-eb3173530b4f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.918780 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jv8\" (UniqueName: \"kubernetes.io/projected/88197577-5157-4d99-9813-eb3173530b4f-kube-api-access-k7jv8\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.918818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88197577-5157-4d99-9813-eb3173530b4f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.020752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7jv8\" (UniqueName: \"kubernetes.io/projected/88197577-5157-4d99-9813-eb3173530b4f-kube-api-access-k7jv8\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: 
\"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.020814 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88197577-5157-4d99-9813-eb3173530b4f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.020889 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88197577-5157-4d99-9813-eb3173530b4f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.023905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88197577-5157-4d99-9813-eb3173530b4f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.028947 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88197577-5157-4d99-9813-eb3173530b4f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.045810 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k7jv8\" (UniqueName: \"kubernetes.io/projected/88197577-5157-4d99-9813-eb3173530b4f-kube-api-access-k7jv8\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.189461 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.274651 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.324609 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") pod \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.324733 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") pod \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.325036 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") pod \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.326484 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5621ad75-f2c2-44c8-aff8-ed4da48fc415" (UID: "5621ad75-f2c2-44c8-aff8-ed4da48fc415"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.331485 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd" (OuterVolumeSpecName: "kube-api-access-qc5bd") pod "5621ad75-f2c2-44c8-aff8-ed4da48fc415" (UID: "5621ad75-f2c2-44c8-aff8-ed4da48fc415"). InnerVolumeSpecName "kube-api-access-qc5bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.336519 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5621ad75-f2c2-44c8-aff8-ed4da48fc415" (UID: "5621ad75-f2c2-44c8-aff8-ed4da48fc415"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383758 4705 generic.go:334] "Generic (PLEG): container finished" podID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerID="004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383839 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" event={"ID":"5621ad75-f2c2-44c8-aff8-ed4da48fc415","Type":"ContainerDied","Data":"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383859 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383894 4705 scope.go:117] "RemoveContainer" containerID="004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383878 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" event={"ID":"5621ad75-f2c2-44c8-aff8-ed4da48fc415","Type":"ContainerDied","Data":"faa1e5018382734db35e1205c39088b34faea391ec6e62672b88da102016cb47"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.390251 4705 generic.go:334] "Generic (PLEG): container finished" podID="895390cd-d0f8-46da-a932-6cccd295f203" containerID="d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b" exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.390330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerDied","Data":"d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.393964 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerID="d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c" exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.394044 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerDied","Data":"d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.397865 4705 generic.go:334] "Generic (PLEG): container finished" podID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerID="44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff" 
exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.397910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerDied","Data":"44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.421512 4705 scope.go:117] "RemoveContainer" containerID="004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.436340 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.436401 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.436411 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: E0216 14:58:12.436718 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61\": container with ID starting with 004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61 not found: ID does not exist" containerID="004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.436749 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61"} err="failed to get container status \"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61\": rpc error: code = NotFound desc = could not find container \"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61\": container with ID starting with 004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61 not found: ID does not exist" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.438209 4705 generic.go:334] "Generic (PLEG): container finished" podID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerID="3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870" exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.452006 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.452039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerDied","Data":"3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.463317 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.476319 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.484562 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.486410 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.509430 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.559627 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") pod \"c8efc871-44f0-4bbd-b639-6adaee23319a\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.559790 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") pod \"c8efc871-44f0-4bbd-b639-6adaee23319a\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.559833 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") pod \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560062 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") pod \"112518bc-4caf-44c2-8920-185e2e90cc9b\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560090 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") pod 
\"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560128 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") pod \"112518bc-4caf-44c2-8920-185e2e90cc9b\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560167 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") pod \"112518bc-4caf-44c2-8920-185e2e90cc9b\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560189 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") pod \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560617 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") pod \"c8efc871-44f0-4bbd-b639-6adaee23319a\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.564657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7" (OuterVolumeSpecName: "kube-api-access-x9mn7") pod "c8efc871-44f0-4bbd-b639-6adaee23319a" (UID: "c8efc871-44f0-4bbd-b639-6adaee23319a"). InnerVolumeSpecName "kube-api-access-x9mn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.564802 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities" (OuterVolumeSpecName: "utilities") pod "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" (UID: "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.564799 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw" (OuterVolumeSpecName: "kube-api-access-lfjqw") pod "112518bc-4caf-44c2-8920-185e2e90cc9b" (UID: "112518bc-4caf-44c2-8920-185e2e90cc9b"). InnerVolumeSpecName "kube-api-access-lfjqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.564915 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities" (OuterVolumeSpecName: "utilities") pod "112518bc-4caf-44c2-8920-185e2e90cc9b" (UID: "112518bc-4caf-44c2-8920-185e2e90cc9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.565930 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82" (OuterVolumeSpecName: "kube-api-access-hmb82") pod "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" (UID: "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788"). InnerVolumeSpecName "kube-api-access-hmb82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.567118 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities" (OuterVolumeSpecName: "utilities") pod "c8efc871-44f0-4bbd-b639-6adaee23319a" (UID: "c8efc871-44f0-4bbd-b639-6adaee23319a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.617166 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" (UID: "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.631422 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8efc871-44f0-4bbd-b639-6adaee23319a" (UID: "c8efc871-44f0-4bbd-b639-6adaee23319a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661545 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") pod \"895390cd-d0f8-46da-a932-6cccd295f203\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661592 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") pod \"895390cd-d0f8-46da-a932-6cccd295f203\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661630 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") pod \"895390cd-d0f8-46da-a932-6cccd295f203\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661925 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661944 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661953 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 
14:58:12.661961 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661970 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661978 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661986 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661994 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.662484 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities" (OuterVolumeSpecName: "utilities") pod "895390cd-d0f8-46da-a932-6cccd295f203" (UID: "895390cd-d0f8-46da-a932-6cccd295f203"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.664125 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w" (OuterVolumeSpecName: "kube-api-access-7bn7w") pod "895390cd-d0f8-46da-a932-6cccd295f203" (UID: "895390cd-d0f8-46da-a932-6cccd295f203"). InnerVolumeSpecName "kube-api-access-7bn7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.708836 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "112518bc-4caf-44c2-8920-185e2e90cc9b" (UID: "112518bc-4caf-44c2-8920-185e2e90cc9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.726391 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "895390cd-d0f8-46da-a932-6cccd295f203" (UID: "895390cd-d0f8-46da-a932-6cccd295f203"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.762599 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") on node \"crc\" DevicePath \"\""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.762634 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.762642 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.762689 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.764514 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ghmpd"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.446089 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerDied","Data":"3d2f0059d40b4313cb2192bb0c8318a3e59e5de2da0badc178590ca35c5bf347"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.446586 4705 scope.go:117] "RemoveContainer" containerID="d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.446129 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvxpr"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.449836 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.449966 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerDied","Data":"9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.456236 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerDied","Data":"5ca975ac41d20405951f16e100085714e84618ea7435589dc42061daef0e3c0d"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.456297 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.457885 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" event={"ID":"88197577-5157-4d99-9813-eb3173530b4f","Type":"ContainerStarted","Data":"38989531c1d423921e4d11207bd66d821a0d3882fbdede6a5d7ccde2f9598b95"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.457910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" event={"ID":"88197577-5157-4d99-9813-eb3173530b4f","Type":"ContainerStarted","Data":"c160938cc4a9aeb02b8fb0dcd8866dc1e6d1972641bc31e88fe4f8e47c6d676f"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.458476 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.461869 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.462411 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerDied","Data":"9e7c06275441e0dc9753d3e97f80b0b2fa0173ed74928bf3711fd998b37c0d36"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.462648 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.486855 4705 scope.go:117] "RemoveContainer" containerID="142e52fe965dccc8447bce8b51d66eb18e77b2fbf8857b7b9eaf42bda581cb4b"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.495871 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" podStartSLOduration=2.495853057 podStartE2EDuration="2.495853057s" podCreationTimestamp="2026-02-16 14:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:58:13.495766984 +0000 UTC m=+287.680744090" watchObservedRunningTime="2026-02-16 14:58:13.495853057 +0000 UTC m=+287.680830133"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.527310 4705 scope.go:117] "RemoveContainer" containerID="47dd83c51982eee0fc8944965237e1d7e630e2a9915e5bf23151e62a40008638"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.528668 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.538872 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.543495 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.549567 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.553806 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.559706 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.561427 4705 scope.go:117] "RemoveContainer" containerID="d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.564523 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.566624 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.576581 4705 scope.go:117] "RemoveContainer" containerID="73ba943d06af17d02c46446ace18358f2e018622fa9d08256b673061932ee618"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.591481 4705 scope.go:117] "RemoveContainer" containerID="4e44853d8ab25d2d5626a88e1f0b8ee2df4324e46ca5431c6ba290df4560e9f2"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.605832 4705 scope.go:117] "RemoveContainer" containerID="44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.622927 4705 scope.go:117] "RemoveContainer" containerID="bc3f70071f15f7c623a394166db10d02b47e2458284d6c7b790a1b750e33d8c7"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.637537 4705 scope.go:117] "RemoveContainer" containerID="2d8d5694b911f4b43d4018735e7222f174757c80b72ed579b3b1544c211daf10"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.651513 4705 scope.go:117] "RemoveContainer" containerID="3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.666399 4705 scope.go:117] "RemoveContainer" containerID="3bf941b0ceb33444ebc5dd947fedfa63976db0f6ca005483c4d7b0a244761dba"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.682471 4705 scope.go:117] "RemoveContainer" containerID="d4b9a5df6e9f03bb94d5e2fb0f0b632bf65e0617fc3ef91575b6942f876f86c6"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.425497 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" path="/var/lib/kubelet/pods/112518bc-4caf-44c2-8920-185e2e90cc9b/volumes"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.426072 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" path="/var/lib/kubelet/pods/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788/volumes"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.426661 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" path="/var/lib/kubelet/pods/5621ad75-f2c2-44c8-aff8-ed4da48fc415/volumes"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.427166 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="895390cd-d0f8-46da-a932-6cccd295f203" path="/var/lib/kubelet/pods/895390cd-d0f8-46da-a932-6cccd295f203/volumes"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.427742 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" path="/var/lib/kubelet/pods/c8efc871-44f0-4bbd-b639-6adaee23319a/volumes"
Feb 16 14:58:26 crc kubenswrapper[4705]: I0216 14:58:26.242881 4705 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.031053 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"]
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032260 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032285 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032316 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032334 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032366 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032425 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032443 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032454 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032468 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032480 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032503 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032516 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032562 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032576 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032595 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032608 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032626 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032637 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032655 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032667 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032683 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032695 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032712 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032725 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032739 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032751 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032923 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032941 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032965 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032984 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.033002 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.033812 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.036889 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.037985 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.038903 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.039674 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.040972 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.050674 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"]
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.198618 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/72ebc12e-e218-4611-bf0f-792c7a949828-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.198674 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpr2t\" (UniqueName: \"kubernetes.io/projected/72ebc12e-e218-4611-bf0f-792c7a949828-kube-api-access-xpr2t\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.198746 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/72ebc12e-e218-4611-bf0f-792c7a949828-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.299739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/72ebc12e-e218-4611-bf0f-792c7a949828-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.299800 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpr2t\" (UniqueName: \"kubernetes.io/projected/72ebc12e-e218-4611-bf0f-792c7a949828-kube-api-access-xpr2t\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.299868 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/72ebc12e-e218-4611-bf0f-792c7a949828-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.300955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/72ebc12e-e218-4611-bf0f-792c7a949828-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.317278 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/72ebc12e-e218-4611-bf0f-792c7a949828-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.323967 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpr2t\" (UniqueName: \"kubernetes.io/projected/72ebc12e-e218-4611-bf0f-792c7a949828-kube-api-access-xpr2t\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.362915 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.888973 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"]
Feb 16 14:58:44 crc kubenswrapper[4705]: I0216 14:58:44.660349 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl" event={"ID":"72ebc12e-e218-4611-bf0f-792c7a949828","Type":"ContainerStarted","Data":"3374959813bb2a88c4ad5f65a202394edea8c86c8f2d291e2b483c6e0ffba088"}
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.602030 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"]
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.603196 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.605918 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.613140 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"]
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.672989 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl" event={"ID":"72ebc12e-e218-4611-bf0f-792c7a949828","Type":"ContainerStarted","Data":"1ba5386d8196ec9b0269d25850eadefeb5d34f76dcf71c0f4115c0afc09dab84"}
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.755019 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d8e34ca0-dbbd-4076-b891-9d44df6973cc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8tn4j\" (UID: \"d8e34ca0-dbbd-4076-b891-9d44df6973cc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.856699 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d8e34ca0-dbbd-4076-b891-9d44df6973cc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8tn4j\" (UID: \"d8e34ca0-dbbd-4076-b891-9d44df6973cc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.871884 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d8e34ca0-dbbd-4076-b891-9d44df6973cc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8tn4j\" (UID: \"d8e34ca0-dbbd-4076-b891-9d44df6973cc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.919923 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:47 crc kubenswrapper[4705]: I0216 14:58:47.390405 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl" podStartSLOduration=2.362070325 podStartE2EDuration="4.390350239s" podCreationTimestamp="2026-02-16 14:58:43 +0000 UTC" firstStartedPulling="2026-02-16 14:58:43.903105679 +0000 UTC m=+318.088082765" lastFinishedPulling="2026-02-16 14:58:45.931385583 +0000 UTC m=+320.116362679" observedRunningTime="2026-02-16 14:58:46.695630914 +0000 UTC m=+320.880607990" watchObservedRunningTime="2026-02-16 14:58:47.390350239 +0000 UTC m=+321.575327345"
Feb 16 14:58:47 crc kubenswrapper[4705]: I0216 14:58:47.395332 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"]
Feb 16 14:58:47 crc kubenswrapper[4705]: W0216 14:58:47.397132 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e34ca0_dbbd_4076_b891_9d44df6973cc.slice/crio-11c3d1f87e9e4e7fa5195ed963dd5d1271f5f155055ae7f604745c4ccb905b44 WatchSource:0}: Error finding container 11c3d1f87e9e4e7fa5195ed963dd5d1271f5f155055ae7f604745c4ccb905b44: Status 404 returned error can't find the container with id 11c3d1f87e9e4e7fa5195ed963dd5d1271f5f155055ae7f604745c4ccb905b44
Feb 16 14:58:47 crc kubenswrapper[4705]: I0216 14:58:47.684344 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j" event={"ID":"d8e34ca0-dbbd-4076-b891-9d44df6973cc","Type":"ContainerStarted","Data":"11c3d1f87e9e4e7fa5195ed963dd5d1271f5f155055ae7f604745c4ccb905b44"}
Feb 16 14:58:49 crc kubenswrapper[4705]: I0216 14:58:49.700844 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j" event={"ID":"d8e34ca0-dbbd-4076-b891-9d44df6973cc","Type":"ContainerStarted","Data":"e00e7afcb86d2918efeb1ff3ccac1e146178821457c1c91ea59828d6f5be9aea"}
Feb 16 14:58:49 crc kubenswrapper[4705]: I0216 14:58:49.701483 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:49 crc kubenswrapper[4705]: I0216 14:58:49.714942 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:49 crc kubenswrapper[4705]: I0216 14:58:49.727075 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j" podStartSLOduration=2.379565092 podStartE2EDuration="3.727044115s" podCreationTimestamp="2026-02-16 14:58:46 +0000 UTC" firstStartedPulling="2026-02-16 14:58:47.401633268 +0000 UTC m=+321.586610374" lastFinishedPulling="2026-02-16 14:58:48.749112321 +0000 UTC m=+322.934089397" observedRunningTime="2026-02-16 14:58:49.719725038 +0000 UTC m=+323.904702144" watchObservedRunningTime="2026-02-16 14:58:49.727044115 +0000 UTC m=+323.912021231"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.668639 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-tnfwx"]
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.669640 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.672713 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.674393 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.674467 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.686383 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-tnfwx"]
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.813428 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-metrics-client-ca\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.813526 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.813628 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thz9j\" (UniqueName: \"kubernetes.io/projected/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-kube-api-access-thz9j\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.813684 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.915032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-metrics-client-ca\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.915142 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.915240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thz9j\" (UniqueName: \"kubernetes.io/projected/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-kube-api-access-thz9j\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.915295 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.918012 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-metrics-client-ca\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.927006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.927960 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.945022 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thz9j\" (UniqueName: \"kubernetes.io/projected/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-kube-api-access-thz9j\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.988208 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:51 crc kubenswrapper[4705]: I0216 14:58:51.502176 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-tnfwx"]
Feb 16 14:58:51 crc kubenswrapper[4705]: I0216 14:58:51.734359 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx" event={"ID":"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b","Type":"ContainerStarted","Data":"5894ef1340f11e00d73f5114c91786a42fe5b8be889108ce8e69480fd6d351f5"}
Feb 16 14:58:53 crc kubenswrapper[4705]: I0216 14:58:53.797850 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx" event={"ID":"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b","Type":"ContainerStarted","Data":"f089d0e7440aab03f2cc836492e4d9c838d8ed93045cdc48d6db5966e686b586"}
Feb 16 14:58:53 crc kubenswrapper[4705]: I0216 14:58:53.798341 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx" event={"ID":"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b","Type":"ContainerStarted","Data":"af6035988030a51a23b058e69ceedbd75a510f60667ec5f36754552e903becb6"}
Feb 16 14:58:53 crc kubenswrapper[4705]: I0216 14:58:53.821595 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx" podStartSLOduration=2.192105362 podStartE2EDuration="3.821567312s" podCreationTimestamp="2026-02-16 14:58:50 +0000 UTC" firstStartedPulling="2026-02-16 14:58:51.51665307 +0000 UTC m=+325.701630156" lastFinishedPulling="2026-02-16 14:58:53.14611503 +0000 UTC m=+327.331092106" observedRunningTime="2026-02-16 14:58:53.817709293 +0000 UTC m=+328.002686369" watchObservedRunningTime="2026-02-16 14:58:53.821567312 +0000 UTC m=+328.006544428"
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.955720 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"]
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.957027 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.958958 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.959097 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.975149 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"]
Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.035156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"
Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.035283 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName:
\"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.035349 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv8bh\" (UniqueName: \"kubernetes.io/projected/a10863da-bf1a-4f07-8ffc-4d05deba027a-kube-api-access-vv8bh\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.035420 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a10863da-bf1a-4f07-8ffc-4d05deba027a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.049584 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj"] Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.050887 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.053262 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.053868 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.054650 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.061859 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj"] Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.086848 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-6vxhj"] Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.087896 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.090927 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.091165 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137034 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137111 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137165 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137200 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137235 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137264 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q5br\" (UniqueName: \"kubernetes.io/projected/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-kube-api-access-4q5br\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137300 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv8bh\" (UniqueName: \"kubernetes.io/projected/a10863da-bf1a-4f07-8ffc-4d05deba027a-kube-api-access-vv8bh\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137330 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-metrics-client-ca\") pod \"node-exporter-6vxhj\" (UID: 
\"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137385 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137419 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a10863da-bf1a-4f07-8ffc-4d05deba027a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137444 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-tls\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137483 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-textfile\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137511 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137543 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-sys\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137575 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137615 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-wtmp\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-root\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137676 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqnq8\" (UniqueName: \"kubernetes.io/projected/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-api-access-vqnq8\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.138820 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a10863da-bf1a-4f07-8ffc-4d05deba027a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.144450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.146752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.153087 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv8bh\" (UniqueName: \"kubernetes.io/projected/a10863da-bf1a-4f07-8ffc-4d05deba027a-kube-api-access-vv8bh\") pod 
\"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239078 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239159 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239273 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q5br\" (UniqueName: \"kubernetes.io/projected/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-kube-api-access-4q5br\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239306 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-metrics-client-ca\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239382 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-tls\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239415 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-textfile\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239438 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc 
kubenswrapper[4705]: I0216 14:58:56.239464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-sys\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239497 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239530 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-wtmp\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239566 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-root\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239592 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqnq8\" (UniqueName: \"kubernetes.io/projected/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-api-access-vqnq8\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 
14:58:56.239756 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-sys\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239903 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-root\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-wtmp\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240607 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-textfile\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240723 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240771 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.241104 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-metrics-client-ca\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.247833 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-tls\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.247982 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc 
kubenswrapper[4705]: I0216 14:58:56.248014 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.259521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqnq8\" (UniqueName: \"kubernetes.io/projected/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-api-access-vqnq8\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.260719 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.262236 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q5br\" (UniqueName: \"kubernetes.io/projected/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-kube-api-access-4q5br\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.276358 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.365917 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.402751 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:56.710329 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"] Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:56.817993 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" event={"ID":"a10863da-bf1a-4f07-8ffc-4d05deba027a","Type":"ContainerStarted","Data":"8c601e41340687fe672754432c425e94df05fefb0f5e324452aef3aee109cc19"} Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:56.820897 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6vxhj" event={"ID":"5b3841cd-a0f0-481c-9a3e-4bee8df62db2","Type":"ContainerStarted","Data":"f208ca3e03a0a79df1c645de98fd047e83e4aada3617908aca1b033402879990"} Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.155459 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.157601 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.160748 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.160809 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.163910 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.164818 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.165880 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.166047 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.166577 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.170267 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.181497 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257222 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h98d9\" (UniqueName: 
\"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-kube-api-access-h98d9\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257300 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-volume\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257391 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-web-config\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257461 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257488 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-out\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257513 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257531 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257565 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" 
Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257585 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257603 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.341919 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj"] Feb 16 14:58:57 crc kubenswrapper[4705]: W0216 14:58:57.347317 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b0767c1_7dc6_4c17_baa7_34f91d1f7207.slice/crio-54142d8b9b01a6c37544c98add454a4c36d67874cf7cc956831fea06dde1693d WatchSource:0}: Error finding container 54142d8b9b01a6c37544c98add454a4c36d67874cf7cc956831fea06dde1693d: Status 404 returned error can't find the container with id 54142d8b9b01a6c37544c98add454a4c36d67874cf7cc956831fea06dde1693d Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358224 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358254 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358292 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358309 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358329 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h98d9\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-kube-api-access-h98d9\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: 
I0216 14:58:57.358364 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-volume\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358397 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358414 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-web-config\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358442 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358472 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 
14:58:57.358495 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-out\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.359915 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.360911 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.365291 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.365613 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-out\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.365701 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-volume\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.365958 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.366218 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-web-config\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.366670 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.367768 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.368053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.371910 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.376771 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h98d9\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-kube-api-access-h98d9\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.528735 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.831446 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" event={"ID":"8b0767c1-7dc6-4c17-baa7-34f91d1f7207","Type":"ContainerStarted","Data":"54142d8b9b01a6c37544c98add454a4c36d67874cf7cc956831fea06dde1693d"} Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.833871 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" event={"ID":"a10863da-bf1a-4f07-8ffc-4d05deba027a","Type":"ContainerStarted","Data":"1be43cfe215d5c854f73e730b03a0f9be0055518089e1d063daefc3891e495f0"} Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.834345 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" event={"ID":"a10863da-bf1a-4f07-8ffc-4d05deba027a","Type":"ContainerStarted","Data":"c9d6e81e00e6a49b1afaa3fbaa3c4ce51992a2410a8e7e2ae3afcce8a1821a7c"} Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.003744 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 14:58:58 crc kubenswrapper[4705]: W0216 14:58:58.012765 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8934da22_3ea4_4b0b_be02_6062165cdc7b.slice/crio-7f9e032af2549bb7da3ccbc713365395b5788599b8e4b643f904f7adfbce258d WatchSource:0}: Error finding container 7f9e032af2549bb7da3ccbc713365395b5788599b8e4b643f904f7adfbce258d: Status 404 returned error can't find the container with id 7f9e032af2549bb7da3ccbc713365395b5788599b8e4b643f904f7adfbce258d Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.038440 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6"] Feb 16 14:58:58 crc 
kubenswrapper[4705]: I0216 14:58:58.048439 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.052182 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.052988 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-bf60ue0kt7k38" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.053200 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.053349 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.053456 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6"] Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.054122 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.054303 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072200 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-grpc-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072285 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072332 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072409 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072461 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrq42\" (UniqueName: \"kubernetes.io/projected/515dd6a4-4119-4c19-8d36-fcaf9df137ba-kube-api-access-zrq42\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.073505 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.073552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.073570 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/515dd6a4-4119-4c19-8d36-fcaf9df137ba-metrics-client-ca\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183539 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-grpc-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183599 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " 
pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183630 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183682 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrq42\" (UniqueName: \"kubernetes.io/projected/515dd6a4-4119-4c19-8d36-fcaf9df137ba-kube-api-access-zrq42\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183718 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183772 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183795 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/515dd6a4-4119-4c19-8d36-fcaf9df137ba-metrics-client-ca\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.191976 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/515dd6a4-4119-4c19-8d36-fcaf9df137ba-metrics-client-ca\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.205720 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.211001 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc 
kubenswrapper[4705]: I0216 14:58:58.211662 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.214230 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.214520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.216104 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-grpc-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.218827 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrq42\" (UniqueName: \"kubernetes.io/projected/515dd6a4-4119-4c19-8d36-fcaf9df137ba-kube-api-access-zrq42\") pod 
\"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.374513 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.841440 4705 generic.go:334] "Generic (PLEG): container finished" podID="5b3841cd-a0f0-481c-9a3e-4bee8df62db2" containerID="318162a536ab2026b02263dfe1cda4a0c5e93bbfdbcd47bdcd459fc7d0b8d4f9" exitCode=0 Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.841484 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6vxhj" event={"ID":"5b3841cd-a0f0-481c-9a3e-4bee8df62db2","Type":"ContainerDied","Data":"318162a536ab2026b02263dfe1cda4a0c5e93bbfdbcd47bdcd459fc7d0b8d4f9"} Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.842864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"7f9e032af2549bb7da3ccbc713365395b5788599b8e4b643f904f7adfbce258d"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.575942 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6"] Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.852160 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" event={"ID":"8b0767c1-7dc6-4c17-baa7-34f91d1f7207","Type":"ContainerStarted","Data":"426a9d8d5c29662c9db0b6b3816f93e8090cb0f6179947489103d38c8f0a334d"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.852213 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" 
event={"ID":"8b0767c1-7dc6-4c17-baa7-34f91d1f7207","Type":"ContainerStarted","Data":"d7749e96570da9e34f747fa92618268b8f4d665f615634cb6452db7542c7d07a"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.852224 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" event={"ID":"8b0767c1-7dc6-4c17-baa7-34f91d1f7207","Type":"ContainerStarted","Data":"0e48fbee67bf0a20b366db6b56a564eded959a7c79f5d6541b23114c9a55d2b8"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.856833 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6vxhj" event={"ID":"5b3841cd-a0f0-481c-9a3e-4bee8df62db2","Type":"ContainerStarted","Data":"b233f064d4b94b1967e7b91880bed2c0d0dbd1fe86ddac4b3bc6e8be441b80a1"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.856904 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6vxhj" event={"ID":"5b3841cd-a0f0-481c-9a3e-4bee8df62db2","Type":"ContainerStarted","Data":"4542cec339befec468e9da9511dc4622bc961f47f402e263d321d6d438fca981"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.859130 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"1dd5baf26a8eaf740b8c98463ec34682b8be890fdc1f1129358afe0010c97a8a"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.864936 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" event={"ID":"a10863da-bf1a-4f07-8ffc-4d05deba027a","Type":"ContainerStarted","Data":"f38a5533d5f5de9e5f56eb17dbfdb6a069ca8b6b1f943abcef250cac3c2a5c53"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.882464 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" 
podStartSLOduration=2.104681522 podStartE2EDuration="3.882439965s" podCreationTimestamp="2026-02-16 14:58:56 +0000 UTC" firstStartedPulling="2026-02-16 14:58:57.350219422 +0000 UTC m=+331.535196498" lastFinishedPulling="2026-02-16 14:58:59.127977865 +0000 UTC m=+333.312954941" observedRunningTime="2026-02-16 14:58:59.871557929 +0000 UTC m=+334.056535005" watchObservedRunningTime="2026-02-16 14:58:59.882439965 +0000 UTC m=+334.067417041" Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.899292 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" podStartSLOduration=2.83198908 podStartE2EDuration="4.89927352s" podCreationTimestamp="2026-02-16 14:58:55 +0000 UTC" firstStartedPulling="2026-02-16 14:58:57.040488305 +0000 UTC m=+331.225465381" lastFinishedPulling="2026-02-16 14:58:59.107772735 +0000 UTC m=+333.292749821" observedRunningTime="2026-02-16 14:58:59.893490887 +0000 UTC m=+334.078467973" watchObservedRunningTime="2026-02-16 14:58:59.89927352 +0000 UTC m=+334.084250596" Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.914718 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-6vxhj" podStartSLOduration=2.5504684749999997 podStartE2EDuration="3.914695675s" podCreationTimestamp="2026-02-16 14:58:56 +0000 UTC" firstStartedPulling="2026-02-16 14:58:56.426474726 +0000 UTC m=+330.611451802" lastFinishedPulling="2026-02-16 14:58:57.790701926 +0000 UTC m=+331.975679002" observedRunningTime="2026-02-16 14:58:59.912254426 +0000 UTC m=+334.097231542" watchObservedRunningTime="2026-02-16 14:58:59.914695675 +0000 UTC m=+334.099672751" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.236155 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-85b67b995c-f7f68"] Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.237583 4705 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.241258 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.241431 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.241494 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.241569 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ecjvii5sj4rci" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.248657 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.255585 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-85b67b995c-f7f68"] Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335183 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cm9j\" (UniqueName: \"kubernetes.io/projected/830c9eb2-2fd1-4213-9067-d1df432bc535-kube-api-access-8cm9j\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335297 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-metrics-server-audit-profiles\") pod \"metrics-server-85b67b995c-f7f68\" (UID: 
\"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335407 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-client-certs\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335438 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335500 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/830c9eb2-2fd1-4213-9067-d1df432bc535-audit-log\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335613 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-client-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335674 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-server-tls\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436449 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-client-certs\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436514 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436540 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/830c9eb2-2fd1-4213-9067-d1df432bc535-audit-log\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436599 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-client-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " 
pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436634 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-server-tls\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-metrics-server-audit-profiles\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436679 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cm9j\" (UniqueName: \"kubernetes.io/projected/830c9eb2-2fd1-4213-9067-d1df432bc535-kube-api-access-8cm9j\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.438134 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.438818 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/830c9eb2-2fd1-4213-9067-d1df432bc535-audit-log\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.439329 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-metrics-server-audit-profiles\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.445698 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-client-certs\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.453310 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-server-tls\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.455829 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cm9j\" (UniqueName: \"kubernetes.io/projected/830c9eb2-2fd1-4213-9067-d1df432bc535-kube-api-access-8cm9j\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.457296 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-client-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.600000 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.684325 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.684398 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.878598 4705 generic.go:334] "Generic (PLEG): container finished" podID="8934da22-3ea4-4b0b-be02-6062165cdc7b" containerID="78e9a508715e358142a62884bc384aae0d71c81187121bcab83abe45207704c4" exitCode=0 Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.878672 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerDied","Data":"78e9a508715e358142a62884bc384aae0d71c81187121bcab83abe45207704c4"} Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.992168 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6"] Feb 
16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.992919 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.995614 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.996912 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.009833 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6"] Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.024769 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-85b67b995c-f7f68"] Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.049592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9b846d4f-0232-4904-8b2c-26faa7b2a55d-monitoring-plugin-cert\") pod \"monitoring-plugin-59b55b8b7f-pbcb6\" (UID: \"9b846d4f-0232-4904-8b2c-26faa7b2a55d\") " pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.151029 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9b846d4f-0232-4904-8b2c-26faa7b2a55d-monitoring-plugin-cert\") pod \"monitoring-plugin-59b55b8b7f-pbcb6\" (UID: \"9b846d4f-0232-4904-8b2c-26faa7b2a55d\") " pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.157357 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/9b846d4f-0232-4904-8b2c-26faa7b2a55d-monitoring-plugin-cert\") pod \"monitoring-plugin-59b55b8b7f-pbcb6\" (UID: \"9b846d4f-0232-4904-8b2c-26faa7b2a55d\") " pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.319254 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.434719 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.457805 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.462402 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.462689 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.463106 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.463419 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.463921 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.464412 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.465842 4705 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.467305 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.467671 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-a4a5ql6fgckom" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.468030 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.477145 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.478147 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.480583 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560756 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560809 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" 
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560830 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560926 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560976 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560996 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config-out\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") 
" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561050 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561068 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561090 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561111 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561221 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561255 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xpdc\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-kube-api-access-4xpdc\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561276 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-web-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561291 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561340 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561361 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561419 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.663489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.663543 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xpdc\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-kube-api-access-4xpdc\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664065 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-web-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664095 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664120 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664147 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664169 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664252 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664272 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664289 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664307 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665420 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config-out\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665463 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665479 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665504 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665679 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665819 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.666430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.667460 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.670430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.671519 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-web-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.671735 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.672472 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config-out\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.673190 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.673731 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.675971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.676744 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.678183 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.680059 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.681394 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xpdc\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-kube-api-access-4xpdc\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.682155 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.693313 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: W0216 14:59:02.743996 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod830c9eb2_2fd1_4213_9067_d1df432bc535.slice/crio-c5926489f24eef3042f9622cd23504af2700ef73493f3629faeb4cfab4c30359 WatchSource:0}: Error finding container c5926489f24eef3042f9622cd23504af2700ef73493f3629faeb4cfab4c30359: Status 404 returned error can't find the container with id c5926489f24eef3042f9622cd23504af2700ef73493f3629faeb4cfab4c30359
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.788945 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.889917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" event={"ID":"830c9eb2-2fd1-4213-9067-d1df432bc535","Type":"ContainerStarted","Data":"c5926489f24eef3042f9622cd23504af2700ef73493f3629faeb4cfab4c30359"}
Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.005013 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6"]
Feb 16 14:59:03 crc kubenswrapper[4705]: W0216 14:59:03.036561 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b846d4f_0232_4904_8b2c_26faa7b2a55d.slice/crio-8314839583ed825f55327a72f1fc0bf745bda44c546ae745cc53444b4fc562f8 WatchSource:0}: Error finding container 8314839583ed825f55327a72f1fc0bf745bda44c546ae745cc53444b4fc562f8: Status 404 returned error can't find the container with id 8314839583ed825f55327a72f1fc0bf745bda44c546ae745cc53444b4fc562f8
Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.109279 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 14:59:03 crc kubenswrapper[4705]: W0216 14:59:03.129412 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8232e0b2_8d33_4cf9_a367_5c1dc59b8629.slice/crio-e4270142c4921b58579506637b3a02eb8337566fb3e80c947cb50832b60a2c40 WatchSource:0}: Error finding container e4270142c4921b58579506637b3a02eb8337566fb3e80c947cb50832b60a2c40: Status 404 returned error can't find the container with id e4270142c4921b58579506637b3a02eb8337566fb3e80c947cb50832b60a2c40
Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.909726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" event={"ID":"9b846d4f-0232-4904-8b2c-26faa7b2a55d","Type":"ContainerStarted","Data":"8314839583ed825f55327a72f1fc0bf745bda44c546ae745cc53444b4fc562f8"}
Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.913979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"3349fdd1e3e817add8ae172e707bcfa41a80ee06f20ceb9ecdb59e4a4034499d"}
Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.914065 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"5a6ac6ca2fc95c150224da3d41536f56bfac50db1519a4acfff69219d12973be"}
Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.914082 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"77338a05dd068e237d5a9fd67b6fcee42963e4d99d4aa115078ed787618ed911"}
Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.916477 4705 generic.go:334] "Generic (PLEG): container finished" podID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629" containerID="518c82b812147b0274e50839da693ef12ca4cec3f8311e89b254aeb0fdcfffba" exitCode=0
Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.916516 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerDied","Data":"518c82b812147b0274e50839da693ef12ca4cec3f8311e89b254aeb0fdcfffba"}
Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.916538 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"e4270142c4921b58579506637b3a02eb8337566fb3e80c947cb50832b60a2c40"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.941860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"34a7f3b720b05a75927d551d31375e6e4fd2b40396c81a83bf130ddd06e8bd47"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.941917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"840c248a162da06d72f59e9121519b947f80132aac5d03cc2a1c61ea06b2102c"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.941937 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"5f41a84ccee1298292baea7ad2bd89aa48e6fc8584cdd46fdbe4262c72b2b6cb"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.942614 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6"
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.947026 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" event={"ID":"830c9eb2-2fd1-4213-9067-d1df432bc535","Type":"ContainerStarted","Data":"45a42fc52e6365ed11e173699d7cc7a3eafe001f590daccbed6f5b8675aa8f8a"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.951206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"cf05a2079c65f7fa894d38c17011719276cd795fd9b25d6597276dcbf64ca0b7"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.951245 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"d766af1fd5f17efd05a2e026f0cb1ecebf437dd8b88a2f9dd68af1ec4838776e"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.951257 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"1cef49e463d58084bc213af83a2d88022b5f720f7311b0603015dc58897b07d7"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.951267 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"0c4fb87720547cfbd329c2ba327ab6de0321d09fc1ecd4506f7d20bb9ed37300"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.955414 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" event={"ID":"9b846d4f-0232-4904-8b2c-26faa7b2a55d","Type":"ContainerStarted","Data":"1a9bc57ca1ab3bca140012ad8ee7f70e50a4e10c8cee3c21b87b6898cf96b159"}
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.955590 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6"
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.960750 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6"
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.983926 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" podStartSLOduration=2.337809579 podStartE2EDuration="8.983906989s" podCreationTimestamp="2026-02-16 14:58:58 +0000 UTC" firstStartedPulling="2026-02-16 14:58:59.589100531 +0000 UTC m=+333.774077607" lastFinishedPulling="2026-02-16 14:59:06.235197941 +0000 UTC m=+340.420175017" observedRunningTime="2026-02-16 14:59:06.967439815 +0000 UTC m=+341.152416891" watchObservedRunningTime="2026-02-16 14:59:06.983906989 +0000 UTC m=+341.168884055"
Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.984913 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" podStartSLOduration=2.851742894 podStartE2EDuration="5.984908078s" podCreationTimestamp="2026-02-16 14:59:01 +0000 UTC" firstStartedPulling="2026-02-16 14:59:03.039414311 +0000 UTC m=+337.224391387" lastFinishedPulling="2026-02-16 14:59:06.172579495 +0000 UTC m=+340.357556571" observedRunningTime="2026-02-16 14:59:06.97965592 +0000 UTC m=+341.164632996" watchObservedRunningTime="2026-02-16 14:59:06.984908078 +0000 UTC m=+341.169885154"
Feb 16 14:59:07 crc kubenswrapper[4705]: I0216 14:59:07.009055 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" podStartSLOduration=2.586066921 podStartE2EDuration="6.009011528s" podCreationTimestamp="2026-02-16 14:59:01 +0000 UTC" firstStartedPulling="2026-02-16 14:59:02.758046325 +0000 UTC m=+336.943023401" lastFinishedPulling="2026-02-16 14:59:06.180990932 +0000 UTC m=+340.365968008" observedRunningTime="2026-02-16 14:59:07.002824903 +0000 UTC m=+341.187801989" watchObservedRunningTime="2026-02-16 14:59:07.009011528 +0000 UTC m=+341.193988604"
Feb 16 14:59:07 crc kubenswrapper[4705]: I0216 14:59:07.970167 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"7bab3224b85bd914e698521ac62aa5681e1d44413c4a618247c33cc5e42abeb3"}
Feb 16 14:59:07 crc kubenswrapper[4705]: I0216 14:59:07.970579 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"fbf594a2c490f645034f6d4e48201791a5706220d5b67a82a6bed6f43a1e240d"}
Feb 16 14:59:08 crc kubenswrapper[4705]: I0216 14:59:08.006386 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.850021672 podStartE2EDuration="11.006344859s" podCreationTimestamp="2026-02-16 14:58:57 +0000 UTC" firstStartedPulling="2026-02-16 14:58:58.015515517 +0000 UTC m=+332.200492593" lastFinishedPulling="2026-02-16 14:59:06.171838694 +0000 UTC m=+340.356815780" observedRunningTime="2026-02-16 14:59:08.004775934 +0000 UTC m=+342.189753020" watchObservedRunningTime="2026-02-16 14:59:08.006344859 +0000 UTC m=+342.191321945"
Feb 16 14:59:08 crc kubenswrapper[4705]: I0216 14:59:08.389631 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6"
Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.992148 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/0.log"
Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993649 4705 generic.go:334] "Generic (PLEG): container finished" podID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629" containerID="bdaca3e765f591ccc4e1aa4b5f468fb316ba7f9b599cdce288cab982292bdb1a" exitCode=1
Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993695 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"718162dc642ffeb642787a06a6a625c6ba36e4bfeb82dcf2c75c23e9b2e4a519"}
Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993719 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"bc3542bab90925422c40bd8a540b599627c5a460fa0f8327aba31ea0526307bc"}
Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993730 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"7b29500bfd6886644b54142d0e54382aaf8a13889668df2ef6410dcae626c085"}
Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993740 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"27f2bd7c4f49fe67b1d744f33d6dcfa8f5aedaa49d8ba1f32763a3496c9078af"}
Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"b2b436d599380e4cd78818bbe08627018e351f142c0c8694d53d184e917f912b"}
Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993759 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerDied","Data":"bdaca3e765f591ccc4e1aa4b5f468fb316ba7f9b599cdce288cab982292bdb1a"}
Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.994258 4705 scope.go:117] "RemoveContainer" containerID="bdaca3e765f591ccc4e1aa4b5f468fb316ba7f9b599cdce288cab982292bdb1a"
Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.002724 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/1.log"
Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.005887 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/0.log"
Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.006320 4705 generic.go:334] "Generic (PLEG): container finished" podID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39" exitCode=1
Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.006407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerDied","Data":"3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39"}
Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.006449 4705 scope.go:117] "RemoveContainer" containerID="bdaca3e765f591ccc4e1aa4b5f468fb316ba7f9b599cdce288cab982292bdb1a"
Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.007594 4705 scope.go:117] "RemoveContainer" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39"
Feb 16 14:59:11 crc kubenswrapper[4705]: E0216 14:59:11.008528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=prometheus pod=prometheus-k8s-0_openshift-monitoring(8232e0b2-8d33-4cf9-a367-5c1dc59b8629)\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629"
Feb 16 14:59:12 crc kubenswrapper[4705]: I0216 14:59:12.014719 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/1.log"
Feb 16 14:59:12 crc kubenswrapper[4705]: I0216 14:59:12.018385 4705 scope.go:117] "RemoveContainer" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39"
Feb 16 14:59:12 crc kubenswrapper[4705]: E0216 14:59:12.018897 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=prometheus pod=prometheus-k8s-0_openshift-monitoring(8232e0b2-8d33-4cf9-a367-5c1dc59b8629)\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629"
Feb 16 14:59:12 crc kubenswrapper[4705]: I0216 14:59:12.790715 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:12 crc kubenswrapper[4705]: I0216 14:59:12.791157 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:13 crc kubenswrapper[4705]: I0216 14:59:13.023462 4705 scope.go:117] "RemoveContainer" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39"
Feb 16 14:59:13 crc kubenswrapper[4705]: E0216 14:59:13.023907 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=prometheus pod=prometheus-k8s-0_openshift-monitoring(8232e0b2-8d33-4cf9-a367-5c1dc59b8629)\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629"
Feb 16 14:59:21 crc kubenswrapper[4705]: I0216 14:59:21.600425 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68"
Feb 16 14:59:21 crc kubenswrapper[4705]: I0216 14:59:21.601279 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68"
Feb 16 14:59:23 crc kubenswrapper[4705]: I0216 14:59:23.420340 4705 scope.go:117] "RemoveContainer" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39"
Feb 16 14:59:24 crc kubenswrapper[4705]: I0216 14:59:24.135080 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/1.log"
Feb 16 14:59:24 crc kubenswrapper[4705]: I0216 14:59:24.137877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"795a66c27c459d21aa086d47134bfc76ed07733769f38c683d09760d10e91e2e"}
Feb 16 14:59:27 crc kubenswrapper[4705]: I0216 14:59:27.789877 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.239148 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=23.232881048 podStartE2EDuration="28.239126484s" podCreationTimestamp="2026-02-16 14:59:02 +0000 UTC" firstStartedPulling="2026-02-16 14:59:03.920653298 +0000 UTC m=+338.105630374" lastFinishedPulling="2026-02-16 14:59:08.926898734 +0000 UTC m=+343.111875810" observedRunningTime="2026-02-16 14:59:24.190077213 +0000 UTC m=+358.375054359" watchObservedRunningTime="2026-02-16 14:59:30.239126484 +0000 UTC m=+364.424103570"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.246731 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"]
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.247686 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.262157 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"]
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.291897 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.291967 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.291998 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.292070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.292146 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.292177 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.292202 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.393898 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.393961 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.393984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.394024 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.394058 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.394084 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.394108 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.395443 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.395492 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.396170 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.397001 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.401959 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.402903 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.412338 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.575748 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.823049 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"]
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.849694 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j2v29"]
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.852576 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2v29"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.854837 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.858353 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2v29"]
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.911145 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.911217 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29"
Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.911347 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29"
Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013291 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") pod \"community-operators-j2v29\" (UID:
\"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013401 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013450 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013933 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013939 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.037285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") pod \"community-operators-j2v29\" (UID: 
\"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.173404 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.190959 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c5b94cd8-vqsl6" event={"ID":"32a46224-2f51-4cc5-9541-d1e5ac0d98eb","Type":"ContainerStarted","Data":"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836"} Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.191035 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c5b94cd8-vqsl6" event={"ID":"32a46224-2f51-4cc5-9541-d1e5ac0d98eb","Type":"ContainerStarted","Data":"638ba6eacff71725b50db8f008ac8fcbf0b93dd5e605bf9a759eecda45bb8f53"} Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.211078 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57c5b94cd8-vqsl6" podStartSLOduration=1.211054807 podStartE2EDuration="1.211054807s" podCreationTimestamp="2026-02-16 14:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:59:31.206295063 +0000 UTC m=+365.391272139" watchObservedRunningTime="2026-02-16 14:59:31.211054807 +0000 UTC m=+365.396031883" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.441153 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x6x46"] Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.444062 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.447944 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.455212 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6x46"] Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.519149 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77mh6\" (UniqueName: \"kubernetes.io/projected/f7cf3246-f6e6-4509-bde8-6f5db1285126-kube-api-access-77mh6\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.519215 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-utilities\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.519251 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-catalog-content\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.620558 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-utilities\") pod \"certified-operators-x6x46\" (UID: 
\"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.621031 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-catalog-content\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.621146 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77mh6\" (UniqueName: \"kubernetes.io/projected/f7cf3246-f6e6-4509-bde8-6f5db1285126-kube-api-access-77mh6\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.622311 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-catalog-content\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.622403 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-utilities\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.643878 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77mh6\" (UniqueName: \"kubernetes.io/projected/f7cf3246-f6e6-4509-bde8-6f5db1285126-kube-api-access-77mh6\") pod \"certified-operators-x6x46\" (UID: 
\"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.687690 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.687763 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.731001 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.779462 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.965095 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wjxs2"] Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.966394 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.998227 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wjxs2"] Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130115 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-certificates\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130221 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130315 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-tls\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130400 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130446 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-trusted-ca\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130477 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-bound-sa-token\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd4mc\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-kube-api-access-jd4mc\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.153851 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.199794 4705 generic.go:334] "Generic (PLEG): container finished" podID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerID="08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88" exitCode=0 Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.199914 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerDied","Data":"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88"} Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.201078 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerStarted","Data":"abefdacd3131f9637e18b5d6a682929bf8b75c5123f9e2a087bae18c0b3b4aa0"} Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.232873 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd4mc\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-kube-api-access-jd4mc\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-certificates\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" 
Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233155 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233195 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-tls\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233284 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-trusted-ca\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233334 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-bound-sa-token\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") 
" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.235734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-certificates\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.235871 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.236331 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-trusted-ca\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.240075 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-tls\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.240311 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: 
\"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.257728 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-bound-sa-token\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.260305 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd4mc\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-kube-api-access-jd4mc\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.281887 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6x46"] Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.290244 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.727654 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wjxs2"] Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.032939 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wptq4"] Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.034763 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.038049 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.043082 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wptq4"] Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.050819 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rtkh\" (UniqueName: \"kubernetes.io/projected/3c9c10e6-7615-4597-91c4-4a8c67ccf112-kube-api-access-2rtkh\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.050863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-utilities\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.050896 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-catalog-content\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153219 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rtkh\" (UniqueName: \"kubernetes.io/projected/3c9c10e6-7615-4597-91c4-4a8c67ccf112-kube-api-access-2rtkh\") pod \"redhat-marketplace-wptq4\" (UID: 
\"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153279 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-utilities\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153308 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-catalog-content\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153762 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-utilities\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153881 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-catalog-content\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.173382 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rtkh\" (UniqueName: \"kubernetes.io/projected/3c9c10e6-7615-4597-91c4-4a8c67ccf112-kube-api-access-2rtkh\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " 
pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.207514 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7cf3246-f6e6-4509-bde8-6f5db1285126" containerID="e092a01ef5c4c273c453ceee4671ff828745ff30bfa6f985a4c5ddebbf76e6e7" exitCode=0 Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.207584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6x46" event={"ID":"f7cf3246-f6e6-4509-bde8-6f5db1285126","Type":"ContainerDied","Data":"e092a01ef5c4c273c453ceee4671ff828745ff30bfa6f985a4c5ddebbf76e6e7"} Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.207610 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6x46" event={"ID":"f7cf3246-f6e6-4509-bde8-6f5db1285126","Type":"ContainerStarted","Data":"db486cf16cd77c52e3c348a4b5b35de52a37858533c08c496a3c14deddef78ac"} Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.208982 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" event={"ID":"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb","Type":"ContainerStarted","Data":"245fdd1cda28cee16ca4bf9c05e932cc7931b6d13004c144257c82ea0d6661cc"} Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.209084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" event={"ID":"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb","Type":"ContainerStarted","Data":"adf0bd6eef49eed06144397724636f5ed969d1ddf657fcec82cf9c9105bb9d84"} Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.209891 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.212639 4705 generic.go:334] "Generic (PLEG): container finished" podID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" 
containerID="07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703" exitCode=0
Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.212772 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerDied","Data":"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703"}
Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.254584 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" podStartSLOduration=2.254563437 podStartE2EDuration="2.254563437s" podCreationTimestamp="2026-02-16 14:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:59:33.247328173 +0000 UTC m=+367.432305259" watchObservedRunningTime="2026-02-16 14:59:33.254563437 +0000 UTC m=+367.439540513"
Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.348173 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wptq4"
Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.817034 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wptq4"]
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.219872 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c9c10e6-7615-4597-91c4-4a8c67ccf112" containerID="7c12c180da63a0da85d707b61f6b0ea37b59a3e80d87b1afefa45e70edc3b011" exitCode=0
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.220026 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wptq4" event={"ID":"3c9c10e6-7615-4597-91c4-4a8c67ccf112","Type":"ContainerDied","Data":"7c12c180da63a0da85d707b61f6b0ea37b59a3e80d87b1afefa45e70edc3b011"}
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.220331 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wptq4" event={"ID":"3c9c10e6-7615-4597-91c4-4a8c67ccf112","Type":"ContainerStarted","Data":"f840be36fd4c2e71a952d516da4bd3e8ba40207d34a76fb2a691ea36620eeb72"}
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.224025 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerStarted","Data":"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8"}
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.292298 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j2v29" podStartSLOduration=2.706469627 podStartE2EDuration="4.292277137s" podCreationTimestamp="2026-02-16 14:59:30 +0000 UTC" firstStartedPulling="2026-02-16 14:59:32.204124218 +0000 UTC m=+366.389101294" lastFinishedPulling="2026-02-16 14:59:33.789931728 +0000 UTC m=+367.974908804" observedRunningTime="2026-02-16 14:59:34.288239484 +0000 UTC m=+368.473216560" watchObservedRunningTime="2026-02-16 14:59:34.292277137 +0000 UTC m=+368.477254213"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.433120 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dzbk2"]
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.435695 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.438983 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.443653 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzbk2"]
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.577824 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwqz\" (UniqueName: \"kubernetes.io/projected/615ad81b-0e00-4b06-88eb-970b4e942b56-kube-api-access-gxwqz\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.577870 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-catalog-content\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.577921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-utilities\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.679632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwqz\" (UniqueName: \"kubernetes.io/projected/615ad81b-0e00-4b06-88eb-970b4e942b56-kube-api-access-gxwqz\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.679696 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-catalog-content\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.679762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-utilities\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.680265 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-utilities\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.680884 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-catalog-content\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.702588 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwqz\" (UniqueName: \"kubernetes.io/projected/615ad81b-0e00-4b06-88eb-970b4e942b56-kube-api-access-gxwqz\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.770679 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:35 crc kubenswrapper[4705]: I0216 14:59:35.210275 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzbk2"]
Feb 16 14:59:35 crc kubenswrapper[4705]: I0216 14:59:35.231891 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzbk2" event={"ID":"615ad81b-0e00-4b06-88eb-970b4e942b56","Type":"ContainerStarted","Data":"2fe1afb2218ad27aa64a34391ad945ffc3289fcf06444335463fd768ddee689c"}
Feb 16 14:59:35 crc kubenswrapper[4705]: I0216 14:59:35.235183 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7cf3246-f6e6-4509-bde8-6f5db1285126" containerID="c0c79e3f53b996269456c02ee6a6774f2b46f3bcf728aff3ab1897d9622b86d5" exitCode=0
Feb 16 14:59:35 crc kubenswrapper[4705]: I0216 14:59:35.235250 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6x46" event={"ID":"f7cf3246-f6e6-4509-bde8-6f5db1285126","Type":"ContainerDied","Data":"c0c79e3f53b996269456c02ee6a6774f2b46f3bcf728aff3ab1897d9622b86d5"}
Feb 16 14:59:37 crc kubenswrapper[4705]: E0216 14:59:37.108969 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]"
Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.252462 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c9c10e6-7615-4597-91c4-4a8c67ccf112" containerID="60e8f536beaa6982622aac5d46efbaf8b72a6459fc8c7a3c13f7aab229f379fe" exitCode=0
Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.252532 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wptq4" event={"ID":"3c9c10e6-7615-4597-91c4-4a8c67ccf112","Type":"ContainerDied","Data":"60e8f536beaa6982622aac5d46efbaf8b72a6459fc8c7a3c13f7aab229f379fe"}
Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.254664 4705 generic.go:334] "Generic (PLEG): container finished" podID="615ad81b-0e00-4b06-88eb-970b4e942b56" containerID="7e052e17d6ead6b7fdfd5a184438404e71c8236333bb41c9f4c77f29414f73c5" exitCode=0
Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.254762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzbk2" event={"ID":"615ad81b-0e00-4b06-88eb-970b4e942b56","Type":"ContainerDied","Data":"7e052e17d6ead6b7fdfd5a184438404e71c8236333bb41c9f4c77f29414f73c5"}
Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.257942 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6x46" event={"ID":"f7cf3246-f6e6-4509-bde8-6f5db1285126","Type":"ContainerStarted","Data":"f05d3d299868582465f9bd1a5cc5b56cde2fbd6fe692c396cd238a55a94f3980"}
Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.303981 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x6x46" podStartSLOduration=3.652180268 podStartE2EDuration="6.303961255s" podCreationTimestamp="2026-02-16 14:59:31 +0000 UTC" firstStartedPulling="2026-02-16 14:59:33.209434404 +0000 UTC m=+367.394411480" lastFinishedPulling="2026-02-16 14:59:35.861215391 +0000 UTC m=+370.046192467" observedRunningTime="2026-02-16 14:59:37.29882594 +0000 UTC m=+371.483803016" watchObservedRunningTime="2026-02-16 14:59:37.303961255 +0000 UTC m=+371.488938331"
Feb 16 14:59:38 crc kubenswrapper[4705]: I0216 14:59:38.267395 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wptq4" event={"ID":"3c9c10e6-7615-4597-91c4-4a8c67ccf112","Type":"ContainerStarted","Data":"6be444772eb44f090f34f396cbf185e43513811ca2d8778d41a10071e164383f"}
Feb 16 14:59:38 crc kubenswrapper[4705]: I0216 14:59:38.296048 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wptq4" podStartSLOduration=1.85856395 podStartE2EDuration="5.296015277s" podCreationTimestamp="2026-02-16 14:59:33 +0000 UTC" firstStartedPulling="2026-02-16 14:59:34.222214151 +0000 UTC m=+368.407191227" lastFinishedPulling="2026-02-16 14:59:37.659665468 +0000 UTC m=+371.844642554" observedRunningTime="2026-02-16 14:59:38.285524691 +0000 UTC m=+372.470501787" watchObservedRunningTime="2026-02-16 14:59:38.296015277 +0000 UTC m=+372.480992363"
Feb 16 14:59:39 crc kubenswrapper[4705]: I0216 14:59:39.280865 4705 generic.go:334] "Generic (PLEG): container finished" podID="615ad81b-0e00-4b06-88eb-970b4e942b56" containerID="570497e729fea4bc5d83de6f9c83cb3b427c22f83907d8ee06734c839c14d70b" exitCode=0
Feb 16 14:59:39 crc kubenswrapper[4705]: I0216 14:59:39.282198 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzbk2" event={"ID":"615ad81b-0e00-4b06-88eb-970b4e942b56","Type":"ContainerDied","Data":"570497e729fea4bc5d83de6f9c83cb3b427c22f83907d8ee06734c839c14d70b"}
Feb 16 14:59:40 crc kubenswrapper[4705]: I0216 14:59:40.576574 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:40 crc kubenswrapper[4705]: I0216 14:59:40.576747 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:40 crc kubenswrapper[4705]: I0216 14:59:40.583773 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.174032 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j2v29"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.174565 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j2v29"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.235551 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j2v29"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.301982 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57c5b94cd8-vqsl6"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.363259 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"]
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.392926 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j2v29"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.614180 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.619871 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.779844 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x6x46"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.779908 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x6x46"
Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.833098 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x6x46"
Feb 16 14:59:42 crc kubenswrapper[4705]: I0216 14:59:42.305843 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzbk2" event={"ID":"615ad81b-0e00-4b06-88eb-970b4e942b56","Type":"ContainerStarted","Data":"4d1fe6b812c56a820cea55ee65b2fee9df6b1cf717d9ce392c279a8c717277c9"}
Feb 16 14:59:42 crc kubenswrapper[4705]: I0216 14:59:42.327211 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dzbk2" podStartSLOduration=4.029047106 podStartE2EDuration="8.32719229s" podCreationTimestamp="2026-02-16 14:59:34 +0000 UTC" firstStartedPulling="2026-02-16 14:59:37.255823617 +0000 UTC m=+371.440800693" lastFinishedPulling="2026-02-16 14:59:41.553968801 +0000 UTC m=+375.738945877" observedRunningTime="2026-02-16 14:59:42.326701536 +0000 UTC m=+376.511678612" watchObservedRunningTime="2026-02-16 14:59:42.32719229 +0000 UTC m=+376.512169366"
Feb 16 14:59:42 crc kubenswrapper[4705]: I0216 14:59:42.352295 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x6x46"
Feb 16 14:59:43 crc kubenswrapper[4705]: I0216 14:59:43.348318 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wptq4"
Feb 16 14:59:43 crc kubenswrapper[4705]: I0216 14:59:43.348871 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wptq4"
Feb 16 14:59:43 crc kubenswrapper[4705]: I0216 14:59:43.402884 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wptq4"
Feb 16 14:59:44 crc kubenswrapper[4705]: I0216 14:59:44.376184 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wptq4"
Feb 16 14:59:44 crc kubenswrapper[4705]: I0216 14:59:44.770950 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:44 crc kubenswrapper[4705]: I0216 14:59:44.771437 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:45 crc kubenswrapper[4705]: I0216 14:59:45.826260 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dzbk2" podUID="615ad81b-0e00-4b06-88eb-970b4e942b56" containerName="registry-server" probeResult="failure" output=<
Feb 16 14:59:45 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 14:59:45 crc kubenswrapper[4705]: >
Feb 16 14:59:52 crc kubenswrapper[4705]: I0216 14:59:52.299951 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2"
Feb 16 14:59:52 crc kubenswrapper[4705]: I0216 14:59:52.370754 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"]
Feb 16 14:59:54 crc kubenswrapper[4705]: I0216 14:59:54.852811 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 14:59:54 crc kubenswrapper[4705]: I0216 14:59:54.930688 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dzbk2"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.203857 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"]
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.206138 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.208480 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.208723 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.219507 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"]
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.321790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.322080 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.322325 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.423980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.424469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.424529 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.425451 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.436382 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.449722 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.575207 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.775570 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"]
Feb 16 15:00:00 crc kubenswrapper[4705]: W0216 15:00:00.791079 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24c9b6f2_f412_4860_9524_8b671c477f83.slice/crio-cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d WatchSource:0}: Error finding container cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d: Status 404 returned error can't find the container with id cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d
Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.454526 4705 generic.go:334] "Generic (PLEG): container finished" podID="24c9b6f2-f412-4860-9524-8b671c477f83" containerID="6fb2c5a749e97a8125f039d31686c6310a49662f79ec4dbdd96faae30b6b0365" exitCode=0
Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.454978 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" event={"ID":"24c9b6f2-f412-4860-9524-8b671c477f83","Type":"ContainerDied","Data":"6fb2c5a749e97a8125f039d31686c6310a49662f79ec4dbdd96faae30b6b0365"}
Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.455019 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" event={"ID":"24c9b6f2-f412-4860-9524-8b671c477f83","Type":"ContainerStarted","Data":"cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d"}
Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.684565 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.684663 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.684729 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4"
Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.685528 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.685607 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308" gracePeriod=600
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.469904 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308" exitCode=0
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.470857 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308"}
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.470907 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6"}
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.470932 4705 scope.go:117] "RemoveContainer" containerID="8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a"
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.721033 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.780294 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") pod \"24c9b6f2-f412-4860-9524-8b671c477f83\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") "
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.780498 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") pod \"24c9b6f2-f412-4860-9524-8b671c477f83\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") "
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.780543 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") pod \"24c9b6f2-f412-4860-9524-8b671c477f83\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") "
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.781819 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume" (OuterVolumeSpecName: "config-volume") pod "24c9b6f2-f412-4860-9524-8b671c477f83" (UID: "24c9b6f2-f412-4860-9524-8b671c477f83"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.785691 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl" (OuterVolumeSpecName: "kube-api-access-m96xl") pod "24c9b6f2-f412-4860-9524-8b671c477f83" (UID: "24c9b6f2-f412-4860-9524-8b671c477f83"). InnerVolumeSpecName "kube-api-access-m96xl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.790256 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.793513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24c9b6f2-f412-4860-9524-8b671c477f83" (UID: "24c9b6f2-f412-4860-9524-8b671c477f83"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.823082 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.883084 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.883503 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") on node \"crc\" DevicePath \"\""
Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.883651 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 15:00:03 crc kubenswrapper[4705]: I0216 15:00:03.481164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" event={"ID":"24c9b6f2-f412-4860-9524-8b671c477f83","Type":"ContainerDied","Data":"cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d"}
Feb 16 15:00:03 crc kubenswrapper[4705]: I0216 15:00:03.481627 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d"
Feb 16 15:00:03 crc kubenswrapper[4705]: I0216 15:00:03.481182 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"
Feb 16 15:00:03 crc kubenswrapper[4705]: I0216 15:00:03.535679 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.409480 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fnrqq" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" containerID="cri-o://3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" gracePeriod=15
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.828176 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fnrqq_ee710a8b-3390-4749-949f-e8efa983b1ae/console/0.log"
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.828659 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fnrqq"
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.961692 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") "
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962181 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") "
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962398 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") "
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962518 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") "
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962598 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") "
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962650 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") "
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.963556 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") "
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.963779 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.963916 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config" (OuterVolumeSpecName: "console-config") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.963916 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.965654 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca" (OuterVolumeSpecName: "service-ca") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.966858 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.967007 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") on node \"crc\" DevicePath \"\""
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.967040 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.967060 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.970706 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs" (OuterVolumeSpecName: "kube-api-access-stnhs") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "kube-api-access-stnhs".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.971603 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.974508 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.068766 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.069017 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.069117 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525511 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-fnrqq_ee710a8b-3390-4749-949f-e8efa983b1ae/console/0.log" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525602 4705 generic.go:334] "Generic (PLEG): container finished" podID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerID="3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" exitCode=2 Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525660 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fnrqq" event={"ID":"ee710a8b-3390-4749-949f-e8efa983b1ae","Type":"ContainerDied","Data":"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519"} Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525718 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fnrqq" event={"ID":"ee710a8b-3390-4749-949f-e8efa983b1ae","Type":"ContainerDied","Data":"7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2"} Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525761 4705 scope.go:117] "RemoveContainer" containerID="3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.526063 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.553098 4705 scope.go:117] "RemoveContainer" containerID="3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" Feb 16 15:00:07 crc kubenswrapper[4705]: E0216 15:00:07.554690 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519\": container with ID starting with 3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519 not found: ID does not exist" containerID="3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.554754 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519"} err="failed to get container status \"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519\": rpc error: code = NotFound desc = could not find container \"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519\": container with ID starting with 3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519 not found: ID does not exist" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.572768 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.578783 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 15:00:08 crc kubenswrapper[4705]: I0216 15:00:08.430556 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" path="/var/lib/kubelet/pods/ee710a8b-3390-4749-949f-e8efa983b1ae/volumes" Feb 16 15:00:17 crc kubenswrapper[4705]: I0216 15:00:17.427869 4705 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" podUID="347b9dab-29d3-4126-994e-6501af72985a" containerName="registry" containerID="cri-o://8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" gracePeriod=30 Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.039107 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.105441 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106268 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106356 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs7sx\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 
15:00:19.106416 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106438 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106577 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106662 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.107069 4705 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.107581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.111391 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx" (OuterVolumeSpecName: "kube-api-access-xs7sx") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "kube-api-access-xs7sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.111346 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.111542 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.111860 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.119521 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.137026 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208533 4705 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208603 4705 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208633 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs7sx\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208664 4705 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208688 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208710 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.617472 4705 generic.go:334] "Generic (PLEG): container finished" podID="347b9dab-29d3-4126-994e-6501af72985a" containerID="8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" exitCode=0 Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 
15:00:19.617529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" event={"ID":"347b9dab-29d3-4126-994e-6501af72985a","Type":"ContainerDied","Data":"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3"} Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.617548 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.617562 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" event={"ID":"347b9dab-29d3-4126-994e-6501af72985a","Type":"ContainerDied","Data":"a85e7e62d04fb828a3650bdfb354f55b8cca777243fccbeb90166d171d6b20fc"} Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.617585 4705 scope.go:117] "RemoveContainer" containerID="8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.639311 4705 scope.go:117] "RemoveContainer" containerID="8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" Feb 16 15:00:19 crc kubenswrapper[4705]: E0216 15:00:19.639767 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3\": container with ID starting with 8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3 not found: ID does not exist" containerID="8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.639809 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3"} err="failed to get container status \"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3\": rpc error: code = 
NotFound desc = could not find container \"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3\": container with ID starting with 8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3 not found: ID does not exist" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.651930 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.656075 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 15:00:20 crc kubenswrapper[4705]: I0216 15:00:20.431139 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347b9dab-29d3-4126-994e-6501af72985a" path="/var/lib/kubelet/pods/347b9dab-29d3-4126-994e-6501af72985a/volumes" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.635313 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:01:00 crc kubenswrapper[4705]: E0216 15:01:00.636565 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636589 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" Feb 16 15:01:00 crc kubenswrapper[4705]: E0216 15:01:00.636625 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347b9dab-29d3-4126-994e-6501af72985a" containerName="registry" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636637 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="347b9dab-29d3-4126-994e-6501af72985a" containerName="registry" Feb 16 15:01:00 crc kubenswrapper[4705]: E0216 15:01:00.636657 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c9b6f2-f412-4860-9524-8b671c477f83" containerName="collect-profiles" Feb 16 
15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636673 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c9b6f2-f412-4860-9524-8b671c477f83" containerName="collect-profiles" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636875 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636905 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="347b9dab-29d3-4126-994e-6501af72985a" containerName="registry" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636926 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c9b6f2-f412-4860-9524-8b671c477f83" containerName="collect-profiles" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.637694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.656109 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.750850 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.750905 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc 
kubenswrapper[4705]: I0216 15:01:00.750943 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.750967 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.751102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.751245 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.751465 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " 
pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.852813 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.852901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.852964 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.853070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.853166 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " 
pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.853230 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.853275 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.854509 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.854550 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.854666 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 
crc kubenswrapper[4705]: I0216 15:01:00.855153 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.861089 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.861777 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.871955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.023195 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.536139 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:01:01 crc kubenswrapper[4705]: W0216 15:01:01.549190 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80172f35_e30c_409c_b28e_eb65d41dd384.slice/crio-62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b WatchSource:0}: Error finding container 62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b: Status 404 returned error can't find the container with id 62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.967051 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-pzs4q" event={"ID":"80172f35-e30c-409c-b28e-eb65d41dd384","Type":"ContainerStarted","Data":"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8"} Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.967136 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-pzs4q" event={"ID":"80172f35-e30c-409c-b28e-eb65d41dd384","Type":"ContainerStarted","Data":"62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b"} Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.988952 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bb776c56c-pzs4q" podStartSLOduration=1.988920786 podStartE2EDuration="1.988920786s" podCreationTimestamp="2026-02-16 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:01:01.988853994 +0000 UTC m=+456.173831070" watchObservedRunningTime="2026-02-16 15:01:01.988920786 +0000 UTC m=+456.173897902" Feb 16 
15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.023507 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.026632 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.031448 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.071721 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.152657 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"] Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.229135 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57c5b94cd8-vqsl6" podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" containerName="console" containerID="cri-o://bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" gracePeriod=15 Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.628895 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57c5b94cd8-vqsl6_32a46224-2f51-4cc5-9541-d1e5ac0d98eb/console/0.log" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.629493 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673796 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673869 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673922 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673947 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673990 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.674029 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.674101 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.675156 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.675288 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config" (OuterVolumeSpecName: "console-config") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.676120 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca" (OuterVolumeSpecName: "service-ca") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.676143 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.682513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g" (OuterVolumeSpecName: "kube-api-access-hjj4g") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "kube-api-access-hjj4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.682604 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.682911 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777150 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777193 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777206 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777221 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777236 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777247 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777259 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:37 crc 
kubenswrapper[4705]: I0216 15:01:37.282913 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57c5b94cd8-vqsl6_32a46224-2f51-4cc5-9541-d1e5ac0d98eb/console/0.log" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283439 4705 generic.go:334] "Generic (PLEG): container finished" podID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" containerID="bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" exitCode=2 Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283513 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c5b94cd8-vqsl6" event={"ID":"32a46224-2f51-4cc5-9541-d1e5ac0d98eb","Type":"ContainerDied","Data":"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836"} Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c5b94cd8-vqsl6" event={"ID":"32a46224-2f51-4cc5-9541-d1e5ac0d98eb","Type":"ContainerDied","Data":"638ba6eacff71725b50db8f008ac8fcbf0b93dd5e605bf9a759eecda45bb8f53"} Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283608 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283657 4705 scope.go:117] "RemoveContainer" containerID="bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.323074 4705 scope.go:117] "RemoveContainer" containerID="bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" Feb 16 15:01:37 crc kubenswrapper[4705]: E0216 15:01:37.323870 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836\": container with ID starting with bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836 not found: ID does not exist" containerID="bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.323934 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836"} err="failed to get container status \"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836\": rpc error: code = NotFound desc = could not find container \"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836\": container with ID starting with bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836 not found: ID does not exist" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.329192 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"] Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.335239 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"] Feb 16 15:01:38 crc kubenswrapper[4705]: I0216 15:01:38.429347 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" path="/var/lib/kubelet/pods/32a46224-2f51-4cc5-9541-d1e5ac0d98eb/volumes" Feb 16 15:02:01 crc kubenswrapper[4705]: I0216 15:02:01.684462 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:02:01 crc kubenswrapper[4705]: I0216 15:02:01.685081 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:02:31 crc kubenswrapper[4705]: I0216 15:02:31.684767 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:02:31 crc kubenswrapper[4705]: I0216 15:02:31.685753 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.132250 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl"] Feb 16 15:02:57 crc kubenswrapper[4705]: E0216 15:02:57.133117 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" 
containerName="console" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.133132 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" containerName="console" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.133258 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" containerName="console" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.134181 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.137930 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.160144 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl"] Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.224982 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.225098 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc 
kubenswrapper[4705]: I0216 15:02:57.225158 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.326596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.326724 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.326783 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.327326 4705 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.327631 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.355517 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.471755 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.779200 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl"] Feb 16 15:02:58 crc kubenswrapper[4705]: I0216 15:02:58.008579 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerStarted","Data":"0359100cb99e84b41217a7de1e79a7da3afdf45d2fc6a1ac7355b749dce5e44c"} Feb 16 15:02:58 crc kubenswrapper[4705]: I0216 15:02:58.009042 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerStarted","Data":"af9d587dc12e66cee3c869b48aa051d9ef95eae69828c2668c474b994769d2a5"} Feb 16 15:02:59 crc kubenswrapper[4705]: I0216 15:02:59.019892 4705 generic.go:334] "Generic (PLEG): container finished" podID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerID="0359100cb99e84b41217a7de1e79a7da3afdf45d2fc6a1ac7355b749dce5e44c" exitCode=0 Feb 16 15:02:59 crc kubenswrapper[4705]: I0216 15:02:59.020001 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerDied","Data":"0359100cb99e84b41217a7de1e79a7da3afdf45d2fc6a1ac7355b749dce5e44c"} Feb 16 15:02:59 crc kubenswrapper[4705]: I0216 15:02:59.022513 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.684699 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.685705 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.685785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.686838 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.686911 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6" gracePeriod=600 Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.052586 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6" exitCode=0 Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.052776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6"} Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.053302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948"} Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.053339 4705 scope.go:117] "RemoveContainer" containerID="a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308" Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.056664 4705 generic.go:334] "Generic (PLEG): container finished" podID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerID="a694af306eb8d0590e45cc51974fa037a409725ee7c9141fd04fa8be085ed648" exitCode=0 Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.056772 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerDied","Data":"a694af306eb8d0590e45cc51974fa037a409725ee7c9141fd04fa8be085ed648"} Feb 16 15:03:03 crc kubenswrapper[4705]: I0216 15:03:03.065214 4705 generic.go:334] "Generic (PLEG): container finished" podID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerID="0b63100b539042dc74bc1fc2285d16764f13298cc19566f13ed0b77025455be3" exitCode=0 Feb 16 15:03:03 crc kubenswrapper[4705]: I0216 15:03:03.065674 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerDied","Data":"0b63100b539042dc74bc1fc2285d16764f13298cc19566f13ed0b77025455be3"} Feb 16 15:03:04 crc 
kubenswrapper[4705]: I0216 15:03:04.294643 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.478439 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") pod \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.479149 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") pod \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.479281 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") pod \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.481326 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle" (OuterVolumeSpecName: "bundle") pod "0d36f8fb-4d40-48ef-b2af-aee94e39388a" (UID: "0d36f8fb-4d40-48ef-b2af-aee94e39388a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.488671 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq" (OuterVolumeSpecName: "kube-api-access-56zxq") pod "0d36f8fb-4d40-48ef-b2af-aee94e39388a" (UID: "0d36f8fb-4d40-48ef-b2af-aee94e39388a"). InnerVolumeSpecName "kube-api-access-56zxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.504485 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util" (OuterVolumeSpecName: "util") pod "0d36f8fb-4d40-48ef-b2af-aee94e39388a" (UID: "0d36f8fb-4d40-48ef-b2af-aee94e39388a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.581204 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.581269 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.581288 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:05 crc kubenswrapper[4705]: I0216 15:03:05.086136 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" 
event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerDied","Data":"af9d587dc12e66cee3c869b48aa051d9ef95eae69828c2668c474b994769d2a5"} Feb 16 15:03:05 crc kubenswrapper[4705]: I0216 15:03:05.086191 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9d587dc12e66cee3c869b48aa051d9ef95eae69828c2668c474b994769d2a5" Feb 16 15:03:05 crc kubenswrapper[4705]: I0216 15:03:05.086294 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.334945 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tshhr"] Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336064 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-controller" containerID="cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336142 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="nbdb" containerID="cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336193 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="northd" containerID="cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336235 4705 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336270 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-node" containerID="cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336237 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="sbdb" containerID="cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336308 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-acl-logging" containerID="cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.367999 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" containerID="cri-o://38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" gracePeriod=30 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.118711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/3.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 
15:03:09.121227 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-acl-logging/0.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.121861 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-controller/0.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122361 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" exitCode=0 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122414 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" exitCode=0 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122424 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" exitCode=0 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122433 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" exitCode=0 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122440 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" exitCode=143 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122447 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" exitCode=143 Feb 16 15:03:09 crc kubenswrapper[4705]: 
I0216 15:03:09.122622 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122812 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122879 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122942 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.123015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122894 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.123078 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" 
event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.124834 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/2.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.126250 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/1.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.126301 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ec06562-0237-4709-9469-033783d7d545" containerID="c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6" exitCode=2 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.126339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerDied","Data":"c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.126936 4705 scope.go:117] "RemoveContainer" containerID="c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.127178 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2ljf7_openshift-multus(0ec06562-0237-4709-9469-033783d7d545)\"" pod="openshift-multus/multus-2ljf7" podUID="0ec06562-0237-4709-9469-033783d7d545" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.156492 4705 scope.go:117] "RemoveContainer" containerID="797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.536463 4705 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-acl-logging/0.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.537480 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-controller/0.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.537926 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586520 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586579 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586619 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586651 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc 
kubenswrapper[4705]: I0216 15:03:09.586638 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586692 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586726 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586744 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586765 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586780 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586796 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586832 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586862 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586918 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc 
kubenswrapper[4705]: I0216 15:03:09.586951 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586974 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587004 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587042 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587045 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587121 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587429 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587456 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash" (OuterVolumeSpecName: "host-slash") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587721 4705 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587736 4705 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587751 4705 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587760 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587789 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587813 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587833 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587851 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log" (OuterVolumeSpecName: "node-log") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587870 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587914 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587934 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket" (OuterVolumeSpecName: "log-socket") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587952 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587969 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587987 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.588010 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.588432 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.588996 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.593071 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.601835 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5" (OuterVolumeSpecName: "kube-api-access-67wc5") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "kube-api-access-67wc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.607586 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-drlsg"] Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.607924 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="util" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.607948 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="util" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.607958 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.607968 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.607978 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.607987 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.607997 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="nbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608009 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="nbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608024 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608032 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608046 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-acl-logging" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608053 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-acl-logging" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608064 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="sbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608071 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="sbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608097 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608106 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608118 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" 
containerName="kube-rbac-proxy-node" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608126 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-node" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608138 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="pull" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608147 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="pull" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608160 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608167 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608179 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608186 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608194 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="northd" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608201 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="northd" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608213 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kubecfg-setup" 
Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608221 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kubecfg-setup" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608232 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="extract" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608239 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="extract" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608404 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608417 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="northd" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608430 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608440 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608450 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="nbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608462 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-node" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608471 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="sbdb" Feb 16 15:03:09 
crc kubenswrapper[4705]: I0216 15:03:09.608482 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-acl-logging" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608500 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608509 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608517 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="extract" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608678 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608687 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608833 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.609083 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.612893 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.637787 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689384 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-netns\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689443 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-var-lib-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689516 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpw8\" (UniqueName: \"kubernetes.io/projected/fc67360e-7dc8-4772-bc68-60709d7e4e31-kube-api-access-tjpw8\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689545 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689577 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-bin\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689611 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovn-node-metrics-cert\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689632 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-systemd-units\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689662 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689685 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-kubelet\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689711 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-etc-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689731 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-systemd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-node-log\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689933 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-env-overrides\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690013 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-netd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690072 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-log-socket\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690125 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690154 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-config\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690232 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-ovn\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690262 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-slash\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-script-lib\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690476 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690493 4705 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690518 4705 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690527 4705 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690538 4705 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690547 4705 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690557 4705 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690567 4705 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690591 4705 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690601 4705 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690611 4705 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690621 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690631 4705 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690640 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690649 4705 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690674 4705 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792088 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-etc-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792144 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-systemd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 
15:03:09.792166 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-node-log\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792230 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-env-overrides\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792253 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-netd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792275 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-log-socket\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792293 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792310 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-config\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-ovn\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792359 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-slash\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792402 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-script-lib\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792431 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-netns\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-var-lib-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792471 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpw8\" (UniqueName: \"kubernetes.io/projected/fc67360e-7dc8-4772-bc68-60709d7e4e31-kube-api-access-tjpw8\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792491 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792507 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-bin\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-systemd-units\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792546 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovn-node-metrics-cert\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792561 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-kubelet\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792581 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792665 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792727 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-etc-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792748 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-systemd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792769 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-node-log\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.793520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-env-overrides\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.793560 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-netd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.793584 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-log-socket\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.793605 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: 
\"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794037 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-config\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794075 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-ovn\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794100 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-slash\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794519 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-script-lib\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794556 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-netns\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc 
kubenswrapper[4705]: I0216 15:03:09.794583 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-var-lib-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794902 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794934 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-bin\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794960 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-systemd-units\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.795469 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-kubelet\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.798401 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovn-node-metrics-cert\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.824717 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjpw8\" (UniqueName: \"kubernetes.io/projected/fc67360e-7dc8-4772-bc68-60709d7e4e31-kube-api-access-tjpw8\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.957731 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.141714 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-acl-logging/0.log" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143436 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-controller/0.log" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143799 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" exitCode=0 Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143847 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" exitCode=0 Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" 
event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143958 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143990 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"42045b84aca42a832078848d2b0993c882266e872a0d71d75f9c0c7f12bd5a14"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.144012 4705 scope.go:117] "RemoveContainer" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.144231 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.150711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/2.log" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.156491 4705 generic.go:334] "Generic (PLEG): container finished" podID="fc67360e-7dc8-4772-bc68-60709d7e4e31" containerID="3b59ce49ed456ee51dfd98110d67b37ffe27a7441b0fd7f28142a8cad073dbca" exitCode=0 Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.156558 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerDied","Data":"3b59ce49ed456ee51dfd98110d67b37ffe27a7441b0fd7f28142a8cad073dbca"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.156802 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"54d6ea8b6a911f8ce91e71ee4a2848ae6c8a5b2ddedf3fc0640aafaa3a6480e7"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.201599 4705 scope.go:117] "RemoveContainer" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.235278 4705 scope.go:117] "RemoveContainer" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.271517 4705 scope.go:117] "RemoveContainer" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.305322 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tshhr"] Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.313443 4705 scope.go:117] "RemoveContainer" 
containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.326532 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tshhr"] Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.341980 4705 scope.go:117] "RemoveContainer" containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.365103 4705 scope.go:117] "RemoveContainer" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.386606 4705 scope.go:117] "RemoveContainer" containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.403985 4705 scope.go:117] "RemoveContainer" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.429615 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" path="/var/lib/kubelet/pods/59e81100-8761-4e5f-bab6-07df1c795ccb/volumes" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.439638 4705 scope.go:117] "RemoveContainer" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.440433 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": container with ID starting with 38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f not found: ID does not exist" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.440464 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f"} err="failed to get container status \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": rpc error: code = NotFound desc = could not find container \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": container with ID starting with 38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.440488 4705 scope.go:117] "RemoveContainer" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.440775 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": container with ID starting with f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0 not found: ID does not exist" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.440798 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0"} err="failed to get container status \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": rpc error: code = NotFound desc = could not find container \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": container with ID starting with f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.440811 4705 scope.go:117] "RemoveContainer" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.441099 4705 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": container with ID starting with 3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1 not found: ID does not exist" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.441118 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1"} err="failed to get container status \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": rpc error: code = NotFound desc = could not find container \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": container with ID starting with 3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.441130 4705 scope.go:117] "RemoveContainer" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.444852 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": container with ID starting with ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88 not found: ID does not exist" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.444884 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88"} err="failed to get container status \"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": rpc error: code = NotFound desc = could not find container 
\"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": container with ID starting with ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.444901 4705 scope.go:117] "RemoveContainer" containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.445305 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": container with ID starting with 9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02 not found: ID does not exist" containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.445379 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02"} err="failed to get container status \"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": rpc error: code = NotFound desc = could not find container \"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": container with ID starting with 9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.445416 4705 scope.go:117] "RemoveContainer" containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.445908 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": container with ID starting with b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf not found: ID does not exist" 
containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.445961 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf"} err="failed to get container status \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": rpc error: code = NotFound desc = could not find container \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": container with ID starting with b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.446008 4705 scope.go:117] "RemoveContainer" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.449187 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": container with ID starting with 7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1 not found: ID does not exist" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.449215 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1"} err="failed to get container status \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": rpc error: code = NotFound desc = could not find container \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": container with ID starting with 7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.449231 4705 scope.go:117] 
"RemoveContainer" containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.449577 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": container with ID starting with 8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4 not found: ID does not exist" containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.449604 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4"} err="failed to get container status \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": rpc error: code = NotFound desc = could not find container \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": container with ID starting with 8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.449622 4705 scope.go:117] "RemoveContainer" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.450139 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": container with ID starting with 429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff not found: ID does not exist" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.450161 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff"} err="failed to get container status \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": rpc error: code = NotFound desc = could not find container \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": container with ID starting with 429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.450176 4705 scope.go:117] "RemoveContainer" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.450758 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f"} err="failed to get container status \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": rpc error: code = NotFound desc = could not find container \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": container with ID starting with 38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.450782 4705 scope.go:117] "RemoveContainer" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.451039 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0"} err="failed to get container status \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": rpc error: code = NotFound desc = could not find container \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": container with ID starting with f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0 not found: ID does not 
exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.451063 4705 scope.go:117] "RemoveContainer" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.452295 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1"} err="failed to get container status \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": rpc error: code = NotFound desc = could not find container \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": container with ID starting with 3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.452322 4705 scope.go:117] "RemoveContainer" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.452874 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88"} err="failed to get container status \"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": rpc error: code = NotFound desc = could not find container \"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": container with ID starting with ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.452891 4705 scope.go:117] "RemoveContainer" containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.453155 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02"} err="failed to get container status 
\"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": rpc error: code = NotFound desc = could not find container \"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": container with ID starting with 9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.453171 4705 scope.go:117] "RemoveContainer" containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.453593 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf"} err="failed to get container status \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": rpc error: code = NotFound desc = could not find container \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": container with ID starting with b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.453648 4705 scope.go:117] "RemoveContainer" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.454053 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1"} err="failed to get container status \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": rpc error: code = NotFound desc = could not find container \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": container with ID starting with 7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.454101 4705 scope.go:117] "RemoveContainer" 
containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.455722 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4"} err="failed to get container status \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": rpc error: code = NotFound desc = could not find container \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": container with ID starting with 8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.455760 4705 scope.go:117] "RemoveContainer" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.456073 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff"} err="failed to get container status \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": rpc error: code = NotFound desc = could not find container \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": container with ID starting with 429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff not found: ID does not exist" Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.165830 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"95039a12e6b758bdbc6f4a8e014a8d1561a5920d131ca658b288cc3ad6d9911d"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166395 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" 
event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"bf27d8e3d3c90b79f3ad11747ba3df25378ba91839964917fb4213e922deb5d9"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166410 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"64e0709dc3d0b164095a7b3bd49d3c5ba3b65a453a0459d1dcb913d2802e63b4"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166419 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"b0d4af810793c8f3b7c153ed399cb1e9fbb2b22f6af363011235403128f39352"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166430 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"1bb2013977fdc5560ac4027cfd7c3cb8222455e312e4a8d6d71fa8ac71bb11ea"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"aa84d4f9b1385c793207e1e5d810609b143dd91a6dfd00c180575a132636b3a4"} Feb 16 15:03:14 crc kubenswrapper[4705]: I0216 15:03:14.190080 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"f1ebf74cabced645dcbd8c68f2343636812faf0735da7e5bb7423c97c116faac"} Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.666704 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg"] Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.667919 
4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.669888 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.670320 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-5cct8" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.670459 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.715064 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl"] Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.716061 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.720079 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-94grs" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.720096 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.732256 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh"] Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.735577 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.789061 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bcrn\" (UniqueName: \"kubernetes.io/projected/59894fc4-090e-4e57-84d9-c6fdbe5f3ceb-kube-api-access-8bcrn\") pod \"obo-prometheus-operator-68bc856cb9-f8kwg\" (UID: \"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.890825 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l2rxp"] Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.890929 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891009 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891253 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891500 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bcrn\" (UniqueName: \"kubernetes.io/projected/59894fc4-090e-4e57-84d9-c6fdbe5f3ceb-kube-api-access-8bcrn\") pod \"obo-prometheus-operator-68bc856cb9-f8kwg\" (UID: \"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891784 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.894461 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-h9bq9" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.894953 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.918638 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bcrn\" (UniqueName: \"kubernetes.io/projected/59894fc4-090e-4e57-84d9-c6fdbe5f3ceb-kube-api-access-8bcrn\") pod \"obo-prometheus-operator-68bc856cb9-f8kwg\" (UID: \"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.985919 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002802 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002874 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002900 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c77sv\" (UniqueName: \"kubernetes.io/projected/5510c272-cd32-4850-a9fa-daff2e045b92-kube-api-access-c77sv\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002934 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002953 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5510c272-cd32-4850-a9fa-daff2e045b92-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002976 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.007791 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.008139 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.008312 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.015743 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(6fd9edad8fc683408a693b6a86b54dbf99db7a834617bab4c0844865cad277fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.015837 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(6fd9edad8fc683408a693b6a86b54dbf99db7a834617bab4c0844865cad277fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.015890 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(6fd9edad8fc683408a693b6a86b54dbf99db7a834617bab4c0844865cad277fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.015945 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(6fd9edad8fc683408a693b6a86b54dbf99db7a834617bab4c0844865cad277fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" podUID="59894fc4-090e-4e57-84d9-c6fdbe5f3ceb" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.021816 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.030732 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.050214 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.071678 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(d27a9eabb91c01ac9a4b9c218328a9723b03ab77c8fbe17e9f4f6ac4afef72e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.071780 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(d27a9eabb91c01ac9a4b9c218328a9723b03ab77c8fbe17e9f4f6ac4afef72e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.071820 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(d27a9eabb91c01ac9a4b9c218328a9723b03ab77c8fbe17e9f4f6ac4afef72e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.071893 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(d27a9eabb91c01ac9a4b9c218328a9723b03ab77c8fbe17e9f4f6ac4afef72e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" podUID="b90dedac-68bb-409d-9860-af59c6c7d172" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.081984 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(146e861c38ba3a37f5789bce8191711445cb89997d8f4d3cc341172a8d99f657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.082080 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(146e861c38ba3a37f5789bce8191711445cb89997d8f4d3cc341172a8d99f657): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.082109 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(146e861c38ba3a37f5789bce8191711445cb89997d8f4d3cc341172a8d99f657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.082171 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(146e861c38ba3a37f5789bce8191711445cb89997d8f4d3cc341172a8d99f657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" podUID="81328a1c-32d6-4ce6-9139-8418d2e8fa52" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.095241 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tqj56"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.096144 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.098395 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-r75dd" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.104246 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c77sv\" (UniqueName: \"kubernetes.io/projected/5510c272-cd32-4850-a9fa-daff2e045b92-kube-api-access-c77sv\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.104324 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5510c272-cd32-4850-a9fa-daff2e045b92-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.104402 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acc36de-d26d-44cd-bad6-d31f0a4a4520-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.104478 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g26pw\" (UniqueName: \"kubernetes.io/projected/8acc36de-d26d-44cd-bad6-d31f0a4a4520-kube-api-access-g26pw\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " 
pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.108919 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5510c272-cd32-4850-a9fa-daff2e045b92-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.122674 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c77sv\" (UniqueName: \"kubernetes.io/projected/5510c272-cd32-4850-a9fa-daff2e045b92-kube-api-access-c77sv\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.205297 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.206104 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acc36de-d26d-44cd-bad6-d31f0a4a4520-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.206172 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g26pw\" (UniqueName: \"kubernetes.io/projected/8acc36de-d26d-44cd-bad6-d31f0a4a4520-kube-api-access-g26pw\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.207197 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acc36de-d26d-44cd-bad6-d31f0a4a4520-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.211923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"021e6d99f83ee98863067d674a51fd9d911769ff59e5de6efe7131658cb81c64"} Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.212525 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.212579 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.225841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g26pw\" (UniqueName: \"kubernetes.io/projected/8acc36de-d26d-44cd-bad6-d31f0a4a4520-kube-api-access-g26pw\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.242613 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(8359fb1ecbff5cbf7e8ebf74bc383be3e722d8993aac854d5da38e1e6e37331d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.242710 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(8359fb1ecbff5cbf7e8ebf74bc383be3e722d8993aac854d5da38e1e6e37331d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.242740 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(8359fb1ecbff5cbf7e8ebf74bc383be3e722d8993aac854d5da38e1e6e37331d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.242798 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(8359fb1ecbff5cbf7e8ebf74bc383be3e722d8993aac854d5da38e1e6e37331d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" podUID="5510c272-cd32-4850-a9fa-daff2e045b92" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.251167 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" podStartSLOduration=7.251145568 podStartE2EDuration="7.251145568s" podCreationTimestamp="2026-02-16 15:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:03:16.248497344 +0000 UTC m=+590.433474420" watchObservedRunningTime="2026-02-16 15:03:16.251145568 +0000 UTC m=+590.436122644" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.261806 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.414300 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.476503 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(ffd4dfaf7c9f0e1ccda011d562acabe25c1a072f7baeddfc2b0aeb69d449de86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.476626 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(ffd4dfaf7c9f0e1ccda011d562acabe25c1a072f7baeddfc2b0aeb69d449de86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.476660 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(ffd4dfaf7c9f0e1ccda011d562acabe25c1a072f7baeddfc2b0aeb69d449de86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.476743 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(ffd4dfaf7c9f0e1ccda011d562acabe25c1a072f7baeddfc2b0aeb69d449de86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" podUID="8acc36de-d26d-44cd-bad6-d31f0a4a4520" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.605696 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tqj56"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.610993 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.611146 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.611539 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.638572 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(d3ae48184c7402219402b5695969de376acf36a650ba50356bcd0141ce65adbb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.638674 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(d3ae48184c7402219402b5695969de376acf36a650ba50356bcd0141ce65adbb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.638700 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(d3ae48184c7402219402b5695969de376acf36a650ba50356bcd0141ce65adbb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.638753 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(d3ae48184c7402219402b5695969de376acf36a650ba50356bcd0141ce65adbb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" podUID="59894fc4-090e-4e57-84d9-c6fdbe5f3ceb" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.647615 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l2rxp"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.654052 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.654194 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.654794 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.665146 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.682694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.683305 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.688560 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(6c276d2d6aec0bcd8cf33539961024cb6820e39d0bd3dfd6ddb99ecaf1cb5286): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.688652 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(6c276d2d6aec0bcd8cf33539961024cb6820e39d0bd3dfd6ddb99ecaf1cb5286): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.688685 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(6c276d2d6aec0bcd8cf33539961024cb6820e39d0bd3dfd6ddb99ecaf1cb5286): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.688745 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(6c276d2d6aec0bcd8cf33539961024cb6820e39d0bd3dfd6ddb99ecaf1cb5286): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" podUID="b90dedac-68bb-409d-9860-af59c6c7d172" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.711211 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(b26f2ec2d1b7d5bd31a64e3a9f539258d219637affb1596bb9420afefb24cb17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.711305 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(b26f2ec2d1b7d5bd31a64e3a9f539258d219637affb1596bb9420afefb24cb17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.711330 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(b26f2ec2d1b7d5bd31a64e3a9f539258d219637affb1596bb9420afefb24cb17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.711438 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(b26f2ec2d1b7d5bd31a64e3a9f539258d219637affb1596bb9420afefb24cb17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" podUID="81328a1c-32d6-4ce6-9139-8418d2e8fa52" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.219135 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.219192 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.219550 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.219694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.220096 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.298302 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.314697 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(07ab3988b1f93cbaac04bb24e856cb38e9858a3d316b03119e50387f72148310): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.314885 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(07ab3988b1f93cbaac04bb24e856cb38e9858a3d316b03119e50387f72148310): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.314957 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(07ab3988b1f93cbaac04bb24e856cb38e9858a3d316b03119e50387f72148310): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.315065 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(07ab3988b1f93cbaac04bb24e856cb38e9858a3d316b03119e50387f72148310): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" podUID="5510c272-cd32-4850-a9fa-daff2e045b92" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.319255 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(b2bc31f6aeab4dba718787b3310a33fba5125b458b582fab3f375fbea73b4822): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.319343 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(b2bc31f6aeab4dba718787b3310a33fba5125b458b582fab3f375fbea73b4822): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.319383 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(b2bc31f6aeab4dba718787b3310a33fba5125b458b582fab3f375fbea73b4822): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.319444 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(b2bc31f6aeab4dba718787b3310a33fba5125b458b582fab3f375fbea73b4822): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" podUID="8acc36de-d26d-44cd-bad6-d31f0a4a4520" Feb 16 15:03:24 crc kubenswrapper[4705]: I0216 15:03:24.420406 4705 scope.go:117] "RemoveContainer" containerID="c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6" Feb 16 15:03:24 crc kubenswrapper[4705]: E0216 15:03:24.421145 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2ljf7_openshift-multus(0ec06562-0237-4709-9469-033783d7d545)\"" pod="openshift-multus/multus-2ljf7" podUID="0ec06562-0237-4709-9469-033783d7d545" Feb 16 15:03:26 crc kubenswrapper[4705]: I0216 15:03:26.782419 4705 scope.go:117] "RemoveContainer" containerID="5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873" Feb 16 15:03:28 crc kubenswrapper[4705]: I0216 15:03:28.418596 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:28 crc kubenswrapper[4705]: I0216 15:03:28.419525 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:28 crc kubenswrapper[4705]: E0216 15:03:28.457032 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(db5022999eab3676825c45a06aa59a7061eb4a083c41e8310f9f7c6db5928b1b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:03:28 crc kubenswrapper[4705]: E0216 15:03:28.457137 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(db5022999eab3676825c45a06aa59a7061eb4a083c41e8310f9f7c6db5928b1b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:28 crc kubenswrapper[4705]: E0216 15:03:28.457160 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(db5022999eab3676825c45a06aa59a7061eb4a083c41e8310f9f7c6db5928b1b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:28 crc kubenswrapper[4705]: E0216 15:03:28.457221 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(db5022999eab3676825c45a06aa59a7061eb4a083c41e8310f9f7c6db5928b1b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" podUID="59894fc4-090e-4e57-84d9-c6fdbe5f3ceb" Feb 16 15:03:30 crc kubenswrapper[4705]: I0216 15:03:30.419379 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:30 crc kubenswrapper[4705]: I0216 15:03:30.420319 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:30 crc kubenswrapper[4705]: E0216 15:03:30.448725 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(0002ce38cfb1589f22aace34da09557c1da26847f7e289c8b8f6ce927eef45cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:30 crc kubenswrapper[4705]: E0216 15:03:30.448816 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(0002ce38cfb1589f22aace34da09557c1da26847f7e289c8b8f6ce927eef45cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:30 crc kubenswrapper[4705]: E0216 15:03:30.448840 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(0002ce38cfb1589f22aace34da09557c1da26847f7e289c8b8f6ce927eef45cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:30 crc kubenswrapper[4705]: E0216 15:03:30.448892 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(0002ce38cfb1589f22aace34da09557c1da26847f7e289c8b8f6ce927eef45cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" podUID="5510c272-cd32-4850-a9fa-daff2e045b92" Feb 16 15:03:31 crc kubenswrapper[4705]: I0216 15:03:31.419029 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:31 crc kubenswrapper[4705]: I0216 15:03:31.420293 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:31 crc kubenswrapper[4705]: E0216 15:03:31.459697 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(8da0aae07e7a4ef868d8d40e2889f16223628d8ce46021acb82de4ae6c6f2574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:03:31 crc kubenswrapper[4705]: E0216 15:03:31.459827 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(8da0aae07e7a4ef868d8d40e2889f16223628d8ce46021acb82de4ae6c6f2574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:31 crc kubenswrapper[4705]: E0216 15:03:31.459872 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(8da0aae07e7a4ef868d8d40e2889f16223628d8ce46021acb82de4ae6c6f2574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:31 crc kubenswrapper[4705]: E0216 15:03:31.459966 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(8da0aae07e7a4ef868d8d40e2889f16223628d8ce46021acb82de4ae6c6f2574): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" podUID="81328a1c-32d6-4ce6-9139-8418d2e8fa52" Feb 16 15:03:32 crc kubenswrapper[4705]: I0216 15:03:32.419042 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:32 crc kubenswrapper[4705]: I0216 15:03:32.419178 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:32 crc kubenswrapper[4705]: I0216 15:03:32.419646 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:32 crc kubenswrapper[4705]: I0216 15:03:32.420069 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.489968 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(730a1ff2c35e8d5a5858bc5dfd8da70cb39f85ad7371af6b4cb00225b215c4e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.490050 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(730a1ff2c35e8d5a5858bc5dfd8da70cb39f85ad7371af6b4cb00225b215c4e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.490078 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(730a1ff2c35e8d5a5858bc5dfd8da70cb39f85ad7371af6b4cb00225b215c4e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.490134 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(730a1ff2c35e8d5a5858bc5dfd8da70cb39f85ad7371af6b4cb00225b215c4e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" podUID="8acc36de-d26d-44cd-bad6-d31f0a4a4520" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.502610 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(76afef5f433e7cdc9a3ba89427ecfea419a001036a905934117b14969712d094): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.502723 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(76afef5f433e7cdc9a3ba89427ecfea419a001036a905934117b14969712d094): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.502756 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(76afef5f433e7cdc9a3ba89427ecfea419a001036a905934117b14969712d094): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.502836 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(76afef5f433e7cdc9a3ba89427ecfea419a001036a905934117b14969712d094): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" podUID="b90dedac-68bb-409d-9860-af59c6c7d172" Feb 16 15:03:38 crc kubenswrapper[4705]: I0216 15:03:38.419629 4705 scope.go:117] "RemoveContainer" containerID="c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6" Feb 16 15:03:39 crc kubenswrapper[4705]: I0216 15:03:39.363703 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/2.log" Feb 16 15:03:39 crc kubenswrapper[4705]: I0216 15:03:39.364139 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"fd3158954c0966f76c5348ec79ca5afd950e93895e4999b2dd8f3c5211948c15"} Feb 16 15:03:39 crc kubenswrapper[4705]: I0216 15:03:39.987615 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:41 crc kubenswrapper[4705]: I0216 15:03:41.418713 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:41 crc kubenswrapper[4705]: I0216 15:03:41.419595 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:41 crc kubenswrapper[4705]: I0216 15:03:41.869312 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg"] Feb 16 15:03:41 crc kubenswrapper[4705]: W0216 15:03:41.879011 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59894fc4_090e_4e57_84d9_c6fdbe5f3ceb.slice/crio-8ebdbe7cb28f5d1806a0d8d0a1f59b109a9b92c83792056bc6ec89cd00d9540a WatchSource:0}: Error finding container 8ebdbe7cb28f5d1806a0d8d0a1f59b109a9b92c83792056bc6ec89cd00d9540a: Status 404 returned error can't find the container with id 8ebdbe7cb28f5d1806a0d8d0a1f59b109a9b92c83792056bc6ec89cd00d9540a Feb 16 15:03:42 crc kubenswrapper[4705]: I0216 15:03:42.385887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" event={"ID":"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb","Type":"ContainerStarted","Data":"8ebdbe7cb28f5d1806a0d8d0a1f59b109a9b92c83792056bc6ec89cd00d9540a"} Feb 16 15:03:43 crc kubenswrapper[4705]: I0216 15:03:43.418845 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:43 crc kubenswrapper[4705]: I0216 15:03:43.419702 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:43 crc kubenswrapper[4705]: I0216 15:03:43.690136 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl"] Feb 16 15:03:43 crc kubenswrapper[4705]: W0216 15:03:43.705690 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb90dedac_68bb_409d_9860_af59c6c7d172.slice/crio-37e5624f5618c90fdcb9f48f6c7ce91de9d64c3bcb1ec60c8b2348483fd9c2e4 WatchSource:0}: Error finding container 37e5624f5618c90fdcb9f48f6c7ce91de9d64c3bcb1ec60c8b2348483fd9c2e4: Status 404 returned error can't find the container with id 37e5624f5618c90fdcb9f48f6c7ce91de9d64c3bcb1ec60c8b2348483fd9c2e4 Feb 16 15:03:44 crc kubenswrapper[4705]: I0216 15:03:44.404541 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" event={"ID":"b90dedac-68bb-409d-9860-af59c6c7d172","Type":"ContainerStarted","Data":"37e5624f5618c90fdcb9f48f6c7ce91de9d64c3bcb1ec60c8b2348483fd9c2e4"} Feb 16 15:03:44 crc kubenswrapper[4705]: I0216 15:03:44.419063 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:44 crc kubenswrapper[4705]: I0216 15:03:44.419923 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:45 crc kubenswrapper[4705]: I0216 15:03:45.431797 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:45 crc kubenswrapper[4705]: I0216 15:03:45.435150 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:46 crc kubenswrapper[4705]: I0216 15:03:46.394915 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh"] Feb 16 15:03:46 crc kubenswrapper[4705]: W0216 15:03:46.626421 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81328a1c_32d6_4ce6_9139_8418d2e8fa52.slice/crio-ed552d0d8be2e1250b806c969d18ce7932a76992c41c9dab5129806158029ad5 WatchSource:0}: Error finding container ed552d0d8be2e1250b806c969d18ce7932a76992c41c9dab5129806158029ad5: Status 404 returned error can't find the container with id ed552d0d8be2e1250b806c969d18ce7932a76992c41c9dab5129806158029ad5 Feb 16 15:03:46 crc kubenswrapper[4705]: I0216 15:03:46.830629 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l2rxp"] Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.418716 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.419583 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.435464 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" event={"ID":"81328a1c-32d6-4ce6-9139-8418d2e8fa52","Type":"ContainerStarted","Data":"ed552d0d8be2e1250b806c969d18ce7932a76992c41c9dab5129806158029ad5"} Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.440589 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" event={"ID":"5510c272-cd32-4850-a9fa-daff2e045b92","Type":"ContainerStarted","Data":"32433cc64397b2492b2807c1ff47c03a3a3212494a85bdb06cb8b013277e21cd"} Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.443517 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" event={"ID":"b90dedac-68bb-409d-9860-af59c6c7d172","Type":"ContainerStarted","Data":"54aa794afbb8498da64d8b821fca306f4a783efc08888ed3cd08a7c8f1133617"} Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.448802 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" event={"ID":"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb","Type":"ContainerStarted","Data":"f0333f19bc32e9d1033d8965933ee4967ba988003a995356f91685bb2f376c90"} Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.487314 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" podStartSLOduration=29.498879018 podStartE2EDuration="32.487286223s" podCreationTimestamp="2026-02-16 15:03:15 +0000 UTC" firstStartedPulling="2026-02-16 15:03:43.710111726 +0000 UTC m=+617.895088812" lastFinishedPulling="2026-02-16 15:03:46.698518911 +0000 UTC m=+620.883496017" observedRunningTime="2026-02-16 
15:03:47.466080599 +0000 UTC m=+621.651057715" watchObservedRunningTime="2026-02-16 15:03:47.487286223 +0000 UTC m=+621.672263329" Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.760485 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" podStartSLOduration=28.007618422 podStartE2EDuration="32.760459985s" podCreationTimestamp="2026-02-16 15:03:15 +0000 UTC" firstStartedPulling="2026-02-16 15:03:41.882744573 +0000 UTC m=+616.067721649" lastFinishedPulling="2026-02-16 15:03:46.635586136 +0000 UTC m=+620.820563212" observedRunningTime="2026-02-16 15:03:47.510834334 +0000 UTC m=+621.695811420" watchObservedRunningTime="2026-02-16 15:03:47.760459985 +0000 UTC m=+621.945437061" Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.776898 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tqj56"] Feb 16 15:03:48 crc kubenswrapper[4705]: I0216 15:03:48.456868 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" event={"ID":"8acc36de-d26d-44cd-bad6-d31f0a4a4520","Type":"ContainerStarted","Data":"27519a0110f1f01b3d8b6a5d5886fafe408d9dc7427a86d52a91244ae4b6fa4a"} Feb 16 15:03:48 crc kubenswrapper[4705]: I0216 15:03:48.461815 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" event={"ID":"81328a1c-32d6-4ce6-9139-8418d2e8fa52","Type":"ContainerStarted","Data":"bcfa20245fc1ebf1f3aec8c87d879ec7e94c99f98deafe4047172d958eb1aeab"} Feb 16 15:03:48 crc kubenswrapper[4705]: I0216 15:03:48.490534 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" podStartSLOduration=32.744618451 podStartE2EDuration="33.490508181s" podCreationTimestamp="2026-02-16 15:03:15 +0000 UTC" 
firstStartedPulling="2026-02-16 15:03:46.633865287 +0000 UTC m=+620.818842363" lastFinishedPulling="2026-02-16 15:03:47.379754987 +0000 UTC m=+621.564732093" observedRunningTime="2026-02-16 15:03:48.486518599 +0000 UTC m=+622.671495675" watchObservedRunningTime="2026-02-16 15:03:48.490508181 +0000 UTC m=+622.675485267" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.513227 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" event={"ID":"5510c272-cd32-4850-a9fa-daff2e045b92","Type":"ContainerStarted","Data":"8bb26ab6bfe59d817fad1bb9d57dcc847eecd3dae6f415211e9e8d4d90b0d0c5"} Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.514852 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.516737 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" event={"ID":"8acc36de-d26d-44cd-bad6-d31f0a4a4520","Type":"ContainerStarted","Data":"2bd79f14ae36d970841bd2e127f046ae2d5524516b0344d8522ee17390ef42c2"} Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.516902 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.527956 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.544154 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" podStartSLOduration=32.626454217 podStartE2EDuration="38.544136189s" podCreationTimestamp="2026-02-16 15:03:15 +0000 UTC" firstStartedPulling="2026-02-16 15:03:46.845598976 +0000 UTC m=+621.030576052" 
lastFinishedPulling="2026-02-16 15:03:52.763280948 +0000 UTC m=+626.948258024" observedRunningTime="2026-02-16 15:03:53.539799477 +0000 UTC m=+627.724776563" watchObservedRunningTime="2026-02-16 15:03:53.544136189 +0000 UTC m=+627.729113265" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.573910 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" podStartSLOduration=32.614310463 podStartE2EDuration="37.573887743s" podCreationTimestamp="2026-02-16 15:03:16 +0000 UTC" firstStartedPulling="2026-02-16 15:03:47.786990109 +0000 UTC m=+621.971967185" lastFinishedPulling="2026-02-16 15:03:52.746567389 +0000 UTC m=+626.931544465" observedRunningTime="2026-02-16 15:03:53.56629571 +0000 UTC m=+627.751272846" watchObservedRunningTime="2026-02-16 15:03:53.573887743 +0000 UTC m=+627.758864809" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.645269 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-txcpz"] Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.647153 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.650327 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.650501 4705 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9wjm5" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.650559 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.655686 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-txcpz"] Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.664976 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-46spv"] Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.672565 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.675266 4705 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-mn2f8" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.681700 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mdqgz"] Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.683286 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.685177 4705 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-nd789" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.686165 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27vqb\" (UniqueName: \"kubernetes.io/projected/ca614a32-6a4c-4802-8cb5-a927aac7a59a-kube-api-access-27vqb\") pod \"cert-manager-cainjector-cf98fcc89-txcpz\" (UID: \"ca614a32-6a4c-4802-8cb5-a927aac7a59a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.615213 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n29jj\" (UniqueName: \"kubernetes.io/projected/fc1f84cc-974e-42c8-8b49-120dfe74aa0f-kube-api-access-n29jj\") pod \"cert-manager-webhook-687f57d79b-mdqgz\" (UID: \"fc1f84cc-974e-42c8-8b49-120dfe74aa0f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.615297 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8fb\" (UniqueName: \"kubernetes.io/projected/b6695119-142b-40cb-bdd8-e0e1f55e0e61-kube-api-access-7f8fb\") pod \"cert-manager-858654f9db-46spv\" (UID: \"b6695119-142b-40cb-bdd8-e0e1f55e0e61\") " pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.615366 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27vqb\" (UniqueName: \"kubernetes.io/projected/ca614a32-6a4c-4802-8cb5-a927aac7a59a-kube-api-access-27vqb\") pod \"cert-manager-cainjector-cf98fcc89-txcpz\" (UID: \"ca614a32-6a4c-4802-8cb5-a927aac7a59a\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.665319 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-46spv"] Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.679360 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27vqb\" (UniqueName: \"kubernetes.io/projected/ca614a32-6a4c-4802-8cb5-a927aac7a59a-kube-api-access-27vqb\") pod \"cert-manager-cainjector-cf98fcc89-txcpz\" (UID: \"ca614a32-6a4c-4802-8cb5-a927aac7a59a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.687499 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mdqgz"] Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.716916 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n29jj\" (UniqueName: \"kubernetes.io/projected/fc1f84cc-974e-42c8-8b49-120dfe74aa0f-kube-api-access-n29jj\") pod \"cert-manager-webhook-687f57d79b-mdqgz\" (UID: \"fc1f84cc-974e-42c8-8b49-120dfe74aa0f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.717050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f8fb\" (UniqueName: \"kubernetes.io/projected/b6695119-142b-40cb-bdd8-e0e1f55e0e61-kube-api-access-7f8fb\") pod \"cert-manager-858654f9db-46spv\" (UID: \"b6695119-142b-40cb-bdd8-e0e1f55e0e61\") " pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.740535 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n29jj\" (UniqueName: \"kubernetes.io/projected/fc1f84cc-974e-42c8-8b49-120dfe74aa0f-kube-api-access-n29jj\") pod \"cert-manager-webhook-687f57d79b-mdqgz\" (UID: 
\"fc1f84cc-974e-42c8-8b49-120dfe74aa0f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.740844 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f8fb\" (UniqueName: \"kubernetes.io/projected/b6695119-142b-40cb-bdd8-e0e1f55e0e61-kube-api-access-7f8fb\") pod \"cert-manager-858654f9db-46spv\" (UID: \"b6695119-142b-40cb-bdd8-e0e1f55e0e61\") " pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.876860 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.889736 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.895955 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.387373 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-txcpz"] Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.437173 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-46spv"] Feb 16 15:04:02 crc kubenswrapper[4705]: W0216 15:04:02.454765 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc1f84cc_974e_42c8_8b49_120dfe74aa0f.slice/crio-d35392c021935fcb20be18abb0c2085c49953bb7373808bcfaa5be8bdcc5f2c6 WatchSource:0}: Error finding container d35392c021935fcb20be18abb0c2085c49953bb7373808bcfaa5be8bdcc5f2c6: Status 404 returned error can't find the container with id d35392c021935fcb20be18abb0c2085c49953bb7373808bcfaa5be8bdcc5f2c6 Feb 16 15:04:02 crc 
kubenswrapper[4705]: I0216 15:04:02.455556 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mdqgz"] Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.675176 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-46spv" event={"ID":"b6695119-142b-40cb-bdd8-e0e1f55e0e61","Type":"ContainerStarted","Data":"5b7f181475f17306b492564d008685798ca79631f1970db129f9d36580874bf4"} Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.679216 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" event={"ID":"ca614a32-6a4c-4802-8cb5-a927aac7a59a","Type":"ContainerStarted","Data":"99ca1bcc126be53996c5380e0dce62da80c4ec330c0e0c5641497bcd317fd910"} Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.680489 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" event={"ID":"fc1f84cc-974e-42c8-8b49-120dfe74aa0f","Type":"ContainerStarted","Data":"d35392c021935fcb20be18abb0c2085c49953bb7373808bcfaa5be8bdcc5f2c6"} Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.417404 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.718447 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-46spv" event={"ID":"b6695119-142b-40cb-bdd8-e0e1f55e0e61","Type":"ContainerStarted","Data":"4545222601f9f06cd26254dbf52fe9f2e960e72f003261b59875146b1cbb42a7"} Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.720612 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" event={"ID":"ca614a32-6a4c-4802-8cb5-a927aac7a59a","Type":"ContainerStarted","Data":"337e968ec4e10aa825e3261df4185ac89feb16f9c242af4eff79221d0637b53f"} Feb 16 15:04:06 crc 
kubenswrapper[4705]: I0216 15:04:06.722185 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" event={"ID":"fc1f84cc-974e-42c8-8b49-120dfe74aa0f","Type":"ContainerStarted","Data":"f295e69cc8831f0062d92f5967ad40485e3a1d75ed48166739c5d90c37f0aedc"} Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.722331 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.739408 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-46spv" podStartSLOduration=3.044082255 podStartE2EDuration="6.739367506s" podCreationTimestamp="2026-02-16 15:04:00 +0000 UTC" firstStartedPulling="2026-02-16 15:04:02.442544814 +0000 UTC m=+636.627521880" lastFinishedPulling="2026-02-16 15:04:06.137830055 +0000 UTC m=+640.322807131" observedRunningTime="2026-02-16 15:04:06.737602017 +0000 UTC m=+640.922579093" watchObservedRunningTime="2026-02-16 15:04:06.739367506 +0000 UTC m=+640.924344572" Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.777436 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" podStartSLOduration=3.017783617 podStartE2EDuration="6.777410922s" podCreationTimestamp="2026-02-16 15:04:00 +0000 UTC" firstStartedPulling="2026-02-16 15:04:02.45843742 +0000 UTC m=+636.643414496" lastFinishedPulling="2026-02-16 15:04:06.218064725 +0000 UTC m=+640.403041801" observedRunningTime="2026-02-16 15:04:06.760601911 +0000 UTC m=+640.945578997" watchObservedRunningTime="2026-02-16 15:04:06.777410922 +0000 UTC m=+640.962387998" Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.781337 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" podStartSLOduration=3.03964357 
podStartE2EDuration="6.781324132s" podCreationTimestamp="2026-02-16 15:04:00 +0000 UTC" firstStartedPulling="2026-02-16 15:04:02.39356477 +0000 UTC m=+636.578541846" lastFinishedPulling="2026-02-16 15:04:06.135245332 +0000 UTC m=+640.320222408" observedRunningTime="2026-02-16 15:04:06.775346344 +0000 UTC m=+640.960323430" watchObservedRunningTime="2026-02-16 15:04:06.781324132 +0000 UTC m=+640.966301208" Feb 16 15:04:11 crc kubenswrapper[4705]: I0216 15:04:11.900610 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.621405 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd"] Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.625227 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.627991 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.650315 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd"] Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.709755 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.709919 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.709989 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.811774 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.811848 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.811884 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.812332 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.812714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.835030 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.951795 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.995708 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"] Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.997849 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.013774 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"] Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.119467 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.119642 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.119675 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.225257 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.225350 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.225370 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.226036 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: 
\"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.226260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.255056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.305627 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd"] Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.388768 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.823235 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"] Feb 16 15:04:35 crc kubenswrapper[4705]: W0216 15:04:35.827024 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1187d92_0ea8_46f2_9784_ddea0852aa5f.slice/crio-15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3 WatchSource:0}: Error finding container 15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3: Status 404 returned error can't find the container with id 15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3 Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.948822 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerStarted","Data":"15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3"} Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.951441 4705 generic.go:334] "Generic (PLEG): container finished" podID="8035ad9d-50ca-4849-aefe-f1251588793d" containerID="4263c18fa3994bb3a2cb96b7de43e5a88c3cdce9347f094c3adb74ac109bd8f7" exitCode=0 Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.951531 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerDied","Data":"4263c18fa3994bb3a2cb96b7de43e5a88c3cdce9347f094c3adb74ac109bd8f7"} Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.951565 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerStarted","Data":"81fb9520542d9111b0945dfb00863fab30055efbd2c5fc9f4f5bf7565f8f6676"} Feb 16 15:04:36 crc kubenswrapper[4705]: I0216 15:04:36.975212 4705 generic.go:334] "Generic (PLEG): container finished" podID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerID="8db8c4258cc92e68c0a4e62af157ff619a4ff3a159d989daf517b07de4a4941a" exitCode=0 Feb 16 15:04:36 crc kubenswrapper[4705]: I0216 15:04:36.975639 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerDied","Data":"8db8c4258cc92e68c0a4e62af157ff619a4ff3a159d989daf517b07de4a4941a"} Feb 16 15:04:37 crc kubenswrapper[4705]: I0216 15:04:37.986068 4705 generic.go:334] "Generic (PLEG): container finished" podID="8035ad9d-50ca-4849-aefe-f1251588793d" containerID="ca29647308473811807d28967d66117cc8ee0b3e41a9fe4539d2f4b6eee494b2" exitCode=0 Feb 16 15:04:37 crc kubenswrapper[4705]: I0216 15:04:37.986157 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerDied","Data":"ca29647308473811807d28967d66117cc8ee0b3e41a9fe4539d2f4b6eee494b2"} Feb 16 15:04:38 crc kubenswrapper[4705]: I0216 15:04:38.996938 4705 generic.go:334] "Generic (PLEG): container finished" podID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerID="43b0ce933e00fc3cdad93d5e9cd92a0063cfa5f531c6ee046a18569e7fdc3778" exitCode=0 Feb 16 15:04:38 crc kubenswrapper[4705]: I0216 15:04:38.997414 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" 
event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerDied","Data":"43b0ce933e00fc3cdad93d5e9cd92a0063cfa5f531c6ee046a18569e7fdc3778"} Feb 16 15:04:39 crc kubenswrapper[4705]: I0216 15:04:39.002653 4705 generic.go:334] "Generic (PLEG): container finished" podID="8035ad9d-50ca-4849-aefe-f1251588793d" containerID="f874d09a5619ee0aa8c5f7b601d48b8aee377a4d3ab31c3d506f349dcbb4dca4" exitCode=0 Feb 16 15:04:39 crc kubenswrapper[4705]: I0216 15:04:39.002713 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerDied","Data":"f874d09a5619ee0aa8c5f7b601d48b8aee377a4d3ab31c3d506f349dcbb4dca4"} Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.013870 4705 generic.go:334] "Generic (PLEG): container finished" podID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerID="7b4d0aa930b07d0887b5bd246f294f7ffaa52f9ad69d88f75940c3fac48b22e4" exitCode=0 Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.014585 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerDied","Data":"7b4d0aa930b07d0887b5bd246f294f7ffaa52f9ad69d88f75940c3fac48b22e4"} Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.351843 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.438231 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") pod \"8035ad9d-50ca-4849-aefe-f1251588793d\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.438342 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") pod \"8035ad9d-50ca-4849-aefe-f1251588793d\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.438477 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") pod \"8035ad9d-50ca-4849-aefe-f1251588793d\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.439690 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle" (OuterVolumeSpecName: "bundle") pod "8035ad9d-50ca-4849-aefe-f1251588793d" (UID: "8035ad9d-50ca-4849-aefe-f1251588793d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.448567 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs" (OuterVolumeSpecName: "kube-api-access-582gs") pod "8035ad9d-50ca-4849-aefe-f1251588793d" (UID: "8035ad9d-50ca-4849-aefe-f1251588793d"). InnerVolumeSpecName "kube-api-access-582gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.457803 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util" (OuterVolumeSpecName: "util") pod "8035ad9d-50ca-4849-aefe-f1251588793d" (UID: "8035ad9d-50ca-4849-aefe-f1251588793d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.541286 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.541341 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.541357 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.026312 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerDied","Data":"81fb9520542d9111b0945dfb00863fab30055efbd2c5fc9f4f5bf7565f8f6676"} Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.026434 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81fb9520542d9111b0945dfb00863fab30055efbd2c5fc9f4f5bf7565f8f6676" Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.026470 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd"
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.354421 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.455265 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") pod \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") "
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.455398 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") pod \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") "
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.455499 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") pod \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") "
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.457489 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle" (OuterVolumeSpecName: "bundle") pod "c1187d92-0ea8-46f2-9784-ddea0852aa5f" (UID: "c1187d92-0ea8-46f2-9784-ddea0852aa5f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.460578 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl" (OuterVolumeSpecName: "kube-api-access-zsdfl") pod "c1187d92-0ea8-46f2-9784-ddea0852aa5f" (UID: "c1187d92-0ea8-46f2-9784-ddea0852aa5f"). InnerVolumeSpecName "kube-api-access-zsdfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.471558 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util" (OuterVolumeSpecName: "util") pod "c1187d92-0ea8-46f2-9784-ddea0852aa5f" (UID: "c1187d92-0ea8-46f2-9784-ddea0852aa5f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.558757 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.558834 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") on node \"crc\" DevicePath \"\""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.558857 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") on node \"crc\" DevicePath \"\""
Feb 16 15:04:42 crc kubenswrapper[4705]: I0216 15:04:42.035646 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerDied","Data":"15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3"}
Feb 16 15:04:42 crc kubenswrapper[4705]: I0216 15:04:42.035693 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3"
Feb 16 15:04:42 crc kubenswrapper[4705]: I0216 15:04:42.035744 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.747633 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"]
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748510 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748526 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748539 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="util"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748546 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="util"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748557 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="util"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748564 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="util"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748577 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="pull"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748584 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="pull"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748596 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="pull"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748602 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="pull"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748619 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748635 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748777 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748795 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.749538 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.751080 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.752564 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.752900 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.753052 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.753174 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-sjnz6"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.753506 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.765559 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"]
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909703 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzgpb\" (UniqueName: \"kubernetes.io/projected/e0f8cfad-0639-40d4-8a2c-832935b8cddc-kube-api-access-pzgpb\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-apiservice-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-webhook-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909925 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e0f8cfad-0639-40d4-8a2c-832935b8cddc-manager-config\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011419 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-webhook-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011506 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e0f8cfad-0639-40d4-8a2c-832935b8cddc-manager-config\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011533 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011561 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzgpb\" (UniqueName: \"kubernetes.io/projected/e0f8cfad-0639-40d4-8a2c-832935b8cddc-kube-api-access-pzgpb\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011585 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-apiservice-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.012530 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e0f8cfad-0639-40d4-8a2c-832935b8cddc-manager-config\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.017301 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-apiservice-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.020966 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-webhook-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.025034 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.046305 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzgpb\" (UniqueName: \"kubernetes.io/projected/e0f8cfad-0639-40d4-8a2c-832935b8cddc-kube-api-access-pzgpb\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.066937 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.555484 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"]
Feb 16 15:04:52 crc kubenswrapper[4705]: I0216 15:04:52.118515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn" event={"ID":"e0f8cfad-0639-40d4-8a2c-832935b8cddc","Type":"ContainerStarted","Data":"7bd1552fa2d85fafc6c2973bcbde5a096f7b8ec9bda8c7925dabbf9774def2ff"}
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.825414 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-9x6cn"]
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.827424 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.829984 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt"
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.830308 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-cfqgz"
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.830458 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt"
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.843402 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-9x6cn"]
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.927802 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g8n2\" (UniqueName: \"kubernetes.io/projected/0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9-kube-api-access-8g8n2\") pod \"cluster-logging-operator-c769fd969-9x6cn\" (UID: \"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9\") " pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.029909 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g8n2\" (UniqueName: \"kubernetes.io/projected/0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9-kube-api-access-8g8n2\") pod \"cluster-logging-operator-c769fd969-9x6cn\" (UID: \"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9\") " pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.050896 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g8n2\" (UniqueName: \"kubernetes.io/projected/0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9-kube-api-access-8g8n2\") pod \"cluster-logging-operator-c769fd969-9x6cn\" (UID: \"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9\") " pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.144840 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.161165 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn" event={"ID":"e0f8cfad-0639-40d4-8a2c-832935b8cddc","Type":"ContainerStarted","Data":"0e84ca013ecd33f8ae86af8ed8895a2fd615863534e538c0b29af4c75f33733e"}
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.587252 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-9x6cn"]
Feb 16 15:04:58 crc kubenswrapper[4705]: I0216 15:04:58.170858 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn" event={"ID":"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9","Type":"ContainerStarted","Data":"30c80eb188a0efbcb14073af18fc2c2116d55b33c31058db857dbf3f2c23d1ee"}
Feb 16 15:05:01 crc kubenswrapper[4705]: I0216 15:05:01.684835 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:05:01 crc kubenswrapper[4705]: I0216 15:05:01.685179 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.250760 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn" event={"ID":"e0f8cfad-0639-40d4-8a2c-832935b8cddc","Type":"ContainerStarted","Data":"ecf3866f8c6a9cba0642e4e7162243f23c560e99bcf384e3953c330cf4a73284"}
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.252874 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.253537 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.253662 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn" event={"ID":"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9","Type":"ContainerStarted","Data":"edbf86ced194dcd9b2596b8532b9883b1f36f25e9748d36d7a1990702f108154"}
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.286135 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn" podStartSLOduration=1.888619209 podStartE2EDuration="16.286097133s" podCreationTimestamp="2026-02-16 15:04:50 +0000 UTC" firstStartedPulling="2026-02-16 15:04:51.571732862 +0000 UTC m=+685.756709938" lastFinishedPulling="2026-02-16 15:05:05.969210786 +0000 UTC m=+700.154187862" observedRunningTime="2026-02-16 15:05:06.27742309 +0000 UTC m=+700.462400186" watchObservedRunningTime="2026-02-16 15:05:06.286097133 +0000 UTC m=+700.471074209"
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.345748 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn" podStartSLOduration=1.94887196 podStartE2EDuration="10.345715805s" podCreationTimestamp="2026-02-16 15:04:56 +0000 UTC" firstStartedPulling="2026-02-16 15:04:57.598952626 +0000 UTC m=+691.783929702" lastFinishedPulling="2026-02-16 15:05:05.995796481 +0000 UTC m=+700.180773547" observedRunningTime="2026-02-16 15:05:06.339027828 +0000 UTC m=+700.524004924" watchObservedRunningTime="2026-02-16 15:05:06.345715805 +0000 UTC m=+700.530692881"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.717853 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.719441 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.721868 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.722185 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.728803 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.818625 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.818810 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgwn6\" (UniqueName: \"kubernetes.io/projected/cc3a618d-0da6-49be-a4bc-3e3166db35e8-kube-api-access-sgwn6\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.920683 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.921134 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgwn6\" (UniqueName: \"kubernetes.io/projected/cc3a618d-0da6-49be-a4bc-3e3166db35e8-kube-api-access-sgwn6\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.934782 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.935011 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a77f1d7d0ca7e926c7c2bebcbec44eb37e90e66a58abd69d693dca1682a22d00/globalmount\"" pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.958611 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgwn6\" (UniqueName: \"kubernetes.io/projected/cc3a618d-0da6-49be-a4bc-3e3166db35e8-kube-api-access-sgwn6\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.987837 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:13 crc kubenswrapper[4705]: I0216 15:05:13.054074 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 16 15:05:13 crc kubenswrapper[4705]: I0216 15:05:13.277453 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 16 15:05:13 crc kubenswrapper[4705]: I0216 15:05:13.304331 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"cc3a618d-0da6-49be-a4bc-3e3166db35e8","Type":"ContainerStarted","Data":"aa6b9dbab1f0465af37cc4b896964331d66780c0758a80d67ca96d04dc8d190a"}
Feb 16 15:05:17 crc kubenswrapper[4705]: I0216 15:05:17.341780 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"cc3a618d-0da6-49be-a4bc-3e3166db35e8","Type":"ContainerStarted","Data":"378a7e88f34a51eaa2c0fb8bb3936de544e12a1c0bf3c9e2c5eb3ce8ced6f2ba"}
Feb 16 15:05:17 crc kubenswrapper[4705]: I0216 15:05:17.369465 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.634906335 podStartE2EDuration="7.369430317s" podCreationTimestamp="2026-02-16 15:05:10 +0000 UTC" firstStartedPulling="2026-02-16 15:05:13.290502906 +0000 UTC m=+707.475479982" lastFinishedPulling="2026-02-16 15:05:17.025026888 +0000 UTC m=+711.210003964" observedRunningTime="2026-02-16 15:05:17.357189774 +0000 UTC m=+711.542166870" watchObservedRunningTime="2026-02-16 15:05:17.369430317 +0000 UTC m=+711.554407433"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.484177 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"]
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.502800 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.508387 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"]
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.516187 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.516254 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.516595 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.516683 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-z98xc"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.517542 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.623986 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.624067 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.624102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-config\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.624198 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqxbf\" (UniqueName: \"kubernetes.io/projected/feb0e04c-e741-4dbe-8c09-94379b736809-kube-api-access-bqxbf\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.624266 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.654317 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"]
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.655611 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.658261 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.658545 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.658690 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.678511 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"]
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.726858 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqxbf\" (UniqueName: \"kubernetes.io/projected/feb0e04c-e741-4dbe-8c09-94379b736809-kube-api-access-bqxbf\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.726936 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.726970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727003 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727066 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qs6s\" (UniqueName: \"kubernetes.io/projected/dd10ec10-e122-430f-afaf-b0b8222a6b15-kube-api-access-2qs6s\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727130 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727196 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-config\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727228 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-config\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727258 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.728760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.729513 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-config\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.743951 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.744029 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.769178 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqxbf\" (UniqueName: \"kubernetes.io/projected/feb0e04c-e741-4dbe-8c09-94379b736809-kube-api-access-bqxbf\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.804674 4705 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.806514 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.813576 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.815927 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.816177 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834411 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834472 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834527 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-grpc\") pod 
\"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834550 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qs6s\" (UniqueName: \"kubernetes.io/projected/dd10ec10-e122-430f-afaf-b0b8222a6b15-kube-api-access-2qs6s\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-config\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.835793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.836821 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-config\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.846412 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.847916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.849044 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.849131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.871869 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qs6s\" (UniqueName: \"kubernetes.io/projected/dd10ec10-e122-430f-afaf-b0b8222a6b15-kube-api-access-2qs6s\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.913658 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.917611 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.925830 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.926098 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.926276 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.926722 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.937582 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.937819 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.937917 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkp5f\" (UniqueName: 
\"kubernetes.io/projected/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-kube-api-access-pkp5f\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.938000 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-config\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.938125 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.945757 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.955422 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.964814 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-mzgch"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.967904 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.970324 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-lvbwz" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.995998 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.997965 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-mzgch"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.044574 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tenants\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045149 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc 
kubenswrapper[4705]: I0216 15:05:23.045182 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045208 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045251 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045482 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045559 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045603 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-rbac\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48llb\" (UniqueName: \"kubernetes.io/projected/a85ad7e0-59d0-412d-96e1-298020ef9927-kube-api-access-48llb\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045935 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 
15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046056 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfgx\" (UniqueName: \"kubernetes.io/projected/d1223933-4ce9-41dd-9c8a-14a59b540e20-kube-api-access-4cfgx\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046299 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046350 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046383 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-rbac\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046506 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046555 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkp5f\" (UniqueName: \"kubernetes.io/projected/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-kube-api-access-pkp5f\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046626 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046663 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-config\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046773 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.047082 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tenants\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.047912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-config\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.051239 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.051308 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.065329 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkp5f\" (UniqueName: 
\"kubernetes.io/projected/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-kube-api-access-pkp5f\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148664 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tenants\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tenants\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148771 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148803 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148836 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148861 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148888 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-rbac\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " 
pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149009 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48llb\" (UniqueName: \"kubernetes.io/projected/a85ad7e0-59d0-412d-96e1-298020ef9927-kube-api-access-48llb\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149041 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149077 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cfgx\" (UniqueName: \"kubernetes.io/projected/d1223933-4ce9-41dd-9c8a-14a59b540e20-kube-api-access-4cfgx\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149116 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-rbac\") pod 
\"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149190 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.150792 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.150950 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.151329 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-rbac\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.151559 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.152156 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.152427 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.152933 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc 
kubenswrapper[4705]: I0216 15:05:23.153198 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-rbac\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.156152 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.157013 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.157032 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.161894 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tenants\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 
15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.169417 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tenants\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.170099 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.172457 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cfgx\" (UniqueName: \"kubernetes.io/projected/d1223933-4ce9-41dd-9c8a-14a59b540e20-kube-api-access-4cfgx\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.180232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48llb\" (UniqueName: \"kubernetes.io/projected/a85ad7e0-59d0-412d-96e1-298020ef9927-kube-api-access-48llb\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.220333 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.248003 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.327248 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.350290 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.361585 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd10ec10_e122_430f_afaf_b0b8222a6b15.slice/crio-ebd2074637414fde3c0ad09ee0c5131f6655ffb4052f49edcf77af5bfc0bf653 WatchSource:0}: Error finding container ebd2074637414fde3c0ad09ee0c5131f6655ffb4052f49edcf77af5bfc0bf653: Status 404 returned error can't find the container with id ebd2074637414fde3c0ad09ee0c5131f6655ffb4052f49edcf77af5bfc0bf653 Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.397565 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" event={"ID":"dd10ec10-e122-430f-afaf-b0b8222a6b15","Type":"ContainerStarted","Data":"ebd2074637414fde3c0ad09ee0c5131f6655ffb4052f49edcf77af5bfc0bf653"} Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.469231 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.478764 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeb0e04c_e741_4dbe_8c09_94379b736809.slice/crio-5092ad0012abbf10871873507f2f1d24e50dbd8a6214907e06b18b395964e0f4 WatchSource:0}: Error finding container 5092ad0012abbf10871873507f2f1d24e50dbd8a6214907e06b18b395964e0f4: Status 404 returned error can't find the container with 
id 5092ad0012abbf10871873507f2f1d24e50dbd8a6214907e06b18b395964e0f4 Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.497599 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.509660 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e2f02fa_7b78_49ef_8c1a_f9cf7387e063.slice/crio-138818e4ab401787b1053e2001d4b4eb9143cad87b3e00fb036a313bbab9cbe3 WatchSource:0}: Error finding container 138818e4ab401787b1053e2001d4b4eb9143cad87b3e00fb036a313bbab9cbe3: Status 404 returned error can't find the container with id 138818e4ab401787b1053e2001d4b4eb9143cad87b3e00fb036a313bbab9cbe3 Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.588816 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.594587 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1223933_4ce9_41dd_9c8a_14a59b540e20.slice/crio-cb783c475168c71919554f6a1af3bb455e0c9cf4fc55b60222c77612398f1edb WatchSource:0}: Error finding container cb783c475168c71919554f6a1af3bb455e0c9cf4fc55b60222c77612398f1edb: Status 404 returned error can't find the container with id cb783c475168c71919554f6a1af3bb455e0c9cf4fc55b60222c77612398f1edb Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.643577 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-mzgch"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.649016 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda85ad7e0_59d0_412d_96e1_298020ef9927.slice/crio-ac32b35aa6fbfecf2166f6882d699ef30475f8e1c9605a9e2ead5bb34d472066 WatchSource:0}: Error finding container ac32b35aa6fbfecf2166f6882d699ef30475f8e1c9605a9e2ead5bb34d472066: Status 404 returned error can't find the container with id ac32b35aa6fbfecf2166f6882d699ef30475f8e1c9605a9e2ead5bb34d472066 Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.666795 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.667905 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.673797 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.680082 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.682564 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.745262 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.751028 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.754452 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.754717 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.759499 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-101ebabd-da74-4b9e-89b2-949f688a2852\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-101ebabd-da74-4b9e-89b2-949f688a2852\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.769447 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.860873 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.860948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-101ebabd-da74-4b9e-89b2-949f688a2852\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-101ebabd-da74-4b9e-89b2-949f688a2852\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.860980 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861019 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krwzt\" (UniqueName: \"kubernetes.io/projected/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-kube-api-access-krwzt\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861055 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861085 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1deba5f8-a176-451a-a911-46202ad4f272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1deba5f8-a176-451a-a911-46202ad4f272\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861119 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") 
" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861140 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-config\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861548 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.862687 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.868771 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.868828 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-101ebabd-da74-4b9e-89b2-949f688a2852\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-101ebabd-da74-4b9e-89b2-949f688a2852\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8f3f970af946958ef14fb10954f50fbe9bc4c87a801d5543e394c89c77251a3/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.873585 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.873995 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 16 15:05:23 crc 
kubenswrapper[4705]: I0216 15:05:23.876894 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.913681 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-101ebabd-da74-4b9e-89b2-949f688a2852\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-101ebabd-da74-4b9e-89b2-949f688a2852\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.962709 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.962774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmd7t\" (UniqueName: \"kubernetes.io/projected/4cde3c29-9511-489b-9849-468cae07d312-kube-api-access-hmd7t\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.962811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.962962 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963043 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963196 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963234 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b168f061-d361-40be-9e55-01f5eac92511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b168f061-d361-40be-9e55-01f5eac92511\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963278 
4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-config\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963305 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963338 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg8mn\" (UniqueName: \"kubernetes.io/projected/cd14a989-22ac-46cb-9295-a99e2043542b-kube-api-access-qg8mn\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963443 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963478 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krwzt\" (UniqueName: \"kubernetes.io/projected/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-kube-api-access-krwzt\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963564 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1deba5f8-a176-451a-a911-46202ad4f272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1deba5f8-a176-451a-a911-46202ad4f272\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963591 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963614 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963674 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f92f18a1-f41f-4da7-9509-3177223c614b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f92f18a1-f41f-4da7-9509-3177223c614b\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963734 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: 
\"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963761 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-config\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963807 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.965190 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.965306 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-config\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.965702 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: 
\"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.969269 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.969338 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.969754 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.970048 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.970079 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1deba5f8-a176-451a-a911-46202ad4f272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1deba5f8-a176-451a-a911-46202ad4f272\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1fc7625b610fde2ebde857343bbc163e776be4c7204cb9706d02837e83df33a1/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.989260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krwzt\" (UniqueName: \"kubernetes.io/projected/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-kube-api-access-krwzt\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.996025 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1deba5f8-a176-451a-a911-46202ad4f272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1deba5f8-a176-451a-a911-46202ad4f272\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067723 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067782 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067812 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067841 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b168f061-d361-40be-9e55-01f5eac92511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b168f061-d361-40be-9e55-01f5eac92511\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-config\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067888 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067907 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qg8mn\" (UniqueName: \"kubernetes.io/projected/cd14a989-22ac-46cb-9295-a99e2043542b-kube-api-access-qg8mn\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067961 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067976 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067999 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f92f18a1-f41f-4da7-9509-3177223c614b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f92f18a1-f41f-4da7-9509-3177223c614b\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.068026 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.068161 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.068188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmd7t\" (UniqueName: \"kubernetes.io/projected/4cde3c29-9511-489b-9849-468cae07d312-kube-api-access-hmd7t\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.072058 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-config\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.073165 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.073337 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.073522 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.074560 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.075535 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.075577 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f92f18a1-f41f-4da7-9509-3177223c614b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f92f18a1-f41f-4da7-9509-3177223c614b\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/059f9482ade21a6fab869ffa328de857655647028e0e091ba883de990e9a2058/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.076238 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.076477 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b168f061-d361-40be-9e55-01f5eac92511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b168f061-d361-40be-9e55-01f5eac92511\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fc2d44c44bc077227e9eda49f371df5d5070e788785d311c4369b7064adf81c1/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.076962 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.077036 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.078334 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.079171 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.082250 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.087795 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg8mn\" (UniqueName: \"kubernetes.io/projected/cd14a989-22ac-46cb-9295-a99e2043542b-kube-api-access-qg8mn\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.089036 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmd7t\" (UniqueName: 
\"kubernetes.io/projected/4cde3c29-9511-489b-9849-468cae07d312-kube-api-access-hmd7t\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.105576 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b168f061-d361-40be-9e55-01f5eac92511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b168f061-d361-40be-9e55-01f5eac92511\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.120853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f92f18a1-f41f-4da7-9509-3177223c614b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f92f18a1-f41f-4da7-9509-3177223c614b\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.182050 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.286249 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.408031 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" event={"ID":"a85ad7e0-59d0-412d-96e1-298020ef9927","Type":"ContainerStarted","Data":"ac32b35aa6fbfecf2166f6882d699ef30475f8e1c9605a9e2ead5bb34d472066"} Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.410126 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" event={"ID":"d1223933-4ce9-41dd-9c8a-14a59b540e20","Type":"ContainerStarted","Data":"cb783c475168c71919554f6a1af3bb455e0c9cf4fc55b60222c77612398f1edb"} Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.411966 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" event={"ID":"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063","Type":"ContainerStarted","Data":"138818e4ab401787b1053e2001d4b4eb9143cad87b3e00fb036a313bbab9cbe3"} Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.413065 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.416685 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" event={"ID":"feb0e04c-e741-4dbe-8c09-94379b736809","Type":"ContainerStarted","Data":"5092ad0012abbf10871873507f2f1d24e50dbd8a6214907e06b18b395964e0f4"} Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.474039 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.645628 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.781326 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 15:05:24 crc kubenswrapper[4705]: W0216 15:05:24.788725 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a1922a4_a6c5_4187_bcd3_f0e05f3e4fcf.slice/crio-dbed47dd64732097873df5e33e1b0a9ecb6839a301173e13d5028ba062651a09 WatchSource:0}: Error finding container dbed47dd64732097873df5e33e1b0a9ecb6839a301173e13d5028ba062651a09: Status 404 returned error can't find the container with id dbed47dd64732097873df5e33e1b0a9ecb6839a301173e13d5028ba062651a09 Feb 16 15:05:25 crc kubenswrapper[4705]: I0216 15:05:25.430324 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"4cde3c29-9511-489b-9849-468cae07d312","Type":"ContainerStarted","Data":"bb3a76b2644e8f453b7e65d0c2d2642f518adf93a2cf1d2006bbcb5508c311db"} Feb 16 15:05:25 crc kubenswrapper[4705]: I0216 15:05:25.433816 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"cd14a989-22ac-46cb-9295-a99e2043542b","Type":"ContainerStarted","Data":"bd17767eb420cb50a7476afdf3953375c02baa3f84459d005d6c6c70fe4c62f4"} Feb 16 15:05:25 crc kubenswrapper[4705]: I0216 15:05:25.435980 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf","Type":"ContainerStarted","Data":"dbed47dd64732097873df5e33e1b0a9ecb6839a301173e13d5028ba062651a09"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.478337 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" event={"ID":"d1223933-4ce9-41dd-9c8a-14a59b540e20","Type":"ContainerStarted","Data":"2cb9e33a6b308fe859d480ca4f85b284a9ec0d1dc5815b682ebc4fa41358c9de"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.480949 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" event={"ID":"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063","Type":"ContainerStarted","Data":"25fe34bc2dee89b56b8d1066434a686c8212d3548445b5b18842e2f636bed49e"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.481102 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.483843 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf","Type":"ContainerStarted","Data":"e36adcc1845222ea6aabc0798a461dfac9fbf69bed9f414f2135f9de9465bd81"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.484072 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.486727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" event={"ID":"feb0e04c-e741-4dbe-8c09-94379b736809","Type":"ContainerStarted","Data":"0898983eaade81e3c16cf2dae23355d2b43d67c7e538771db63091f6b8a2b4ff"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.486835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.489659 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" event={"ID":"dd10ec10-e122-430f-afaf-b0b8222a6b15","Type":"ContainerStarted","Data":"685a660f6ce2c485929ed6aab815066ba00e70fbe828e19dfcbb2b7db3c335a4"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.489842 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.492665 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"4cde3c29-9511-489b-9849-468cae07d312","Type":"ContainerStarted","Data":"5293a3d9145ebec32fc251c6042e31ae49e08aba9738d0ff05c45795e9a16324"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.492918 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.522031 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"cd14a989-22ac-46cb-9295-a99e2043542b","Type":"ContainerStarted","Data":"170f3d9bf403028a4c045add8161ddaac6745f8ea7595405bc39129af89c463d"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.522642 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 
15:05:28.535166 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" event={"ID":"a85ad7e0-59d0-412d-96e1-298020ef9927","Type":"ContainerStarted","Data":"8f97c6db444154127ed16344b1036c667cea185ec37143df51a58b08d6c19332"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.546655 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" podStartSLOduration=2.794890918 podStartE2EDuration="6.546622806s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.513388814 +0000 UTC m=+717.698365890" lastFinishedPulling="2026-02-16 15:05:27.265120702 +0000 UTC m=+721.450097778" observedRunningTime="2026-02-16 15:05:28.526307029 +0000 UTC m=+722.711284175" watchObservedRunningTime="2026-02-16 15:05:28.546622806 +0000 UTC m=+722.731599892" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.571216 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=4.040354009 podStartE2EDuration="6.571188143s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:24.793149798 +0000 UTC m=+718.978126864" lastFinishedPulling="2026-02-16 15:05:27.323983882 +0000 UTC m=+721.508960998" observedRunningTime="2026-02-16 15:05:28.555300032 +0000 UTC m=+722.740277148" watchObservedRunningTime="2026-02-16 15:05:28.571188143 +0000 UTC m=+722.756165229" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.596458 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" podStartSLOduration=2.745114645 podStartE2EDuration="6.596425539s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.511238873 +0000 UTC m=+717.696215949" lastFinishedPulling="2026-02-16 
15:05:27.362549757 +0000 UTC m=+721.547526843" observedRunningTime="2026-02-16 15:05:28.580770875 +0000 UTC m=+722.765747961" watchObservedRunningTime="2026-02-16 15:05:28.596425539 +0000 UTC m=+722.781402625" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.621835 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" podStartSLOduration=2.625216843 podStartE2EDuration="6.621796019s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.363660305 +0000 UTC m=+717.548637381" lastFinishedPulling="2026-02-16 15:05:27.360239461 +0000 UTC m=+721.545216557" observedRunningTime="2026-02-16 15:05:28.616246472 +0000 UTC m=+722.801223548" watchObservedRunningTime="2026-02-16 15:05:28.621796019 +0000 UTC m=+722.806773105" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.638465 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.811647099 podStartE2EDuration="6.638444881s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:24.497243312 +0000 UTC m=+718.682220388" lastFinishedPulling="2026-02-16 15:05:27.324041054 +0000 UTC m=+721.509018170" observedRunningTime="2026-02-16 15:05:28.636707972 +0000 UTC m=+722.821685068" watchObservedRunningTime="2026-02-16 15:05:28.638444881 +0000 UTC m=+722.823421967" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.662777 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.967139761 podStartE2EDuration="6.662747231s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:24.666174725 +0000 UTC m=+718.851151801" lastFinishedPulling="2026-02-16 15:05:27.361782175 +0000 UTC m=+721.546759271" observedRunningTime="2026-02-16 15:05:28.65776719 +0000 
UTC m=+722.842744266" watchObservedRunningTime="2026-02-16 15:05:28.662747231 +0000 UTC m=+722.847724317" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.567328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" event={"ID":"a85ad7e0-59d0-412d-96e1-298020ef9927","Type":"ContainerStarted","Data":"296f13b1c0b8a02f4c5d212dba05858a4c73d4651a8c9a90edaf9faaf60cb797"} Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.567914 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.567940 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.573839 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" event={"ID":"d1223933-4ce9-41dd-9c8a-14a59b540e20","Type":"ContainerStarted","Data":"731fd9dff81616c1c76942fc4486fb7cfc52e188ed984242d050050f730d7cc6"} Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.577656 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.577899 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.585321 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.588302 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 
15:05:30.592517 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.602573 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" podStartSLOduration=2.601729426 podStartE2EDuration="8.602537684s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.65494188 +0000 UTC m=+717.839918946" lastFinishedPulling="2026-02-16 15:05:29.655750128 +0000 UTC m=+723.840727204" observedRunningTime="2026-02-16 15:05:30.597739328 +0000 UTC m=+724.782716434" watchObservedRunningTime="2026-02-16 15:05:30.602537684 +0000 UTC m=+724.787514810" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.609274 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.664394 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" podStartSLOduration=2.613701886 podStartE2EDuration="8.664345867s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.601182945 +0000 UTC m=+717.786160021" lastFinishedPulling="2026-02-16 15:05:29.651826926 +0000 UTC m=+723.836804002" observedRunningTime="2026-02-16 15:05:30.659282724 +0000 UTC m=+724.844259800" watchObservedRunningTime="2026-02-16 15:05:30.664345867 +0000 UTC m=+724.849322943" Feb 16 15:05:31 crc kubenswrapper[4705]: I0216 15:05:31.684954 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:05:31 
crc kubenswrapper[4705]: I0216 15:05:31.685536 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:05:42 crc kubenswrapper[4705]: I0216 15:05:42.859811 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" Feb 16 15:05:43 crc kubenswrapper[4705]: I0216 15:05:43.004313 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:43 crc kubenswrapper[4705]: I0216 15:05:43.230708 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:44 crc kubenswrapper[4705]: I0216 15:05:44.190154 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:44 crc kubenswrapper[4705]: I0216 15:05:44.294167 4705 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 15:05:44 crc kubenswrapper[4705]: I0216 15:05:44.294244 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:05:44 crc kubenswrapper[4705]: I0216 15:05:44.428734 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" 
Feb 16 15:05:54 crc kubenswrapper[4705]: I0216 15:05:54.292735 4705 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 15:05:54 crc kubenswrapper[4705]: I0216 15:05:54.293537 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:06:00 crc kubenswrapper[4705]: I0216 15:06:00.330436 4705 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.684167 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.684316 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.684425 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.685260 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.685363 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948" gracePeriod=600 Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.896230 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948" exitCode=0 Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.896299 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948"} Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.896764 4705 scope.go:117] "RemoveContainer" containerID="8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6" Feb 16 15:06:02 crc kubenswrapper[4705]: I0216 15:06:02.907081 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25"} Feb 16 15:06:04 crc kubenswrapper[4705]: I0216 15:06:04.292859 4705 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe 
failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 15:06:04 crc kubenswrapper[4705]: I0216 15:06:04.294560 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:06:14 crc kubenswrapper[4705]: I0216 15:06:14.293447 4705 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 15:06:14 crc kubenswrapper[4705]: I0216 15:06:14.294743 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:06:24 crc kubenswrapper[4705]: I0216 15:06:24.292309 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.928447 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.930762 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.934237 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.942533 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.942638 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-lf4hf" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.942841 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.943353 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.948580 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.965802 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.965903 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966192 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966281 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966350 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966537 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966608 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966680 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966714 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966895 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.971650 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.021250 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:42 crc kubenswrapper[4705]: E0216 15:06:42.022041 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint 
kube-api-access-wjnzr metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-rgfsg" podUID="d8d377fe-28fb-4403-97b4-c34aae8f2c09" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.068785 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069018 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069275 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069323 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"collector-rgfsg\" (UID: 
\"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069562 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069618 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069698 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069760 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: E0216 15:06:42.069786 4705 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069834 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") pod 
\"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069921 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: E0216 15:06:42.069953 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics podName:d8d377fe-28fb-4403-97b4-c34aae8f2c09 nodeName:}" failed. No retries permitted until 2026-02-16 15:06:42.569902826 +0000 UTC m=+796.754879912 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics") pod "collector-rgfsg" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09") : secret "collector-metrics" not found Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.070043 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.070571 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.070627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.070891 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.071741 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.078238 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.078350 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.080785 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " 
pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.099798 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.100793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.311356 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.328192 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376177 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376241 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376306 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir" (OuterVolumeSpecName: "datadir") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376333 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376476 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377119 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config" (OuterVolumeSpecName: "config") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377356 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377241 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377568 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377729 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377823 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.378100 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.378118 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.378828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379174 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379225 4705 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379249 4705 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379274 4705 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379300 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.380241 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp" (OuterVolumeSpecName: "tmp") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.380587 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token" (OuterVolumeSpecName: "sa-token") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.382211 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.382998 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr" (OuterVolumeSpecName: "kube-api-access-wjnzr") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "kube-api-access-wjnzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.384605 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token" (OuterVolumeSpecName: "collector-token") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). 
InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481665 4705 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481712 4705 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481729 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481748 4705 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481762 4705 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.583640 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.587285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.685623 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.690926 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics" (OuterVolumeSpecName: "metrics") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.789152 4705 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.323143 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rgfsg" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.399073 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.408351 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.416078 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rv6rf"] Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.417583 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.422460 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-lf4hf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.424227 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.424668 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rv6rf"] Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.425711 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.425867 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.426532 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.436590 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.505671 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-trusted-ca\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.505773 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-syslog-receiver\") pod \"collector-rv6rf\" (UID: 
\"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.505963 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-datadir\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506039 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-metrics\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506064 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-sa-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506219 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d597\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-kube-api-access-6d597\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506391 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config-openshift-service-cacrt\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " 
pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506535 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506587 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-entrypoint\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506651 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-tmp\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506798 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609582 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-trusted-ca\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609667 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-syslog-receiver\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609707 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-datadir\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-metrics\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-sa-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609803 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d597\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-kube-api-access-6d597\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609826 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-datadir\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609832 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config-openshift-service-cacrt\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610427 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610458 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-entrypoint\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610494 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-tmp\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " 
pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610713 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config-openshift-service-cacrt\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.611302 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-entrypoint\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.611689 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-trusted-ca\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.611868 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.615656 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-tmp\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.616462 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" 
(UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.622082 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-syslog-receiver\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.629042 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-metrics\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.632069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-sa-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.642829 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d597\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-kube-api-access-6d597\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.742253 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rv6rf" Feb 16 15:06:44 crc kubenswrapper[4705]: I0216 15:06:44.039241 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rv6rf"] Feb 16 15:06:44 crc kubenswrapper[4705]: I0216 15:06:44.335290 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-rv6rf" event={"ID":"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9","Type":"ContainerStarted","Data":"9a89ecd25a509c86c44efe042de8993269b255cdfdcebd9e9c00fda36d971aee"} Feb 16 15:06:44 crc kubenswrapper[4705]: I0216 15:06:44.435010 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d377fe-28fb-4403-97b4-c34aae8f2c09" path="/var/lib/kubelet/pods/d8d377fe-28fb-4403-97b4-c34aae8f2c09/volumes" Feb 16 15:06:51 crc kubenswrapper[4705]: I0216 15:06:51.398612 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-rv6rf" event={"ID":"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9","Type":"ContainerStarted","Data":"d180b29398284a3458b20aa464fcb3e1345b711a067a14d34a87b057e213eee5"} Feb 16 15:06:51 crc kubenswrapper[4705]: I0216 15:06:51.434149 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-rv6rf" podStartSLOduration=1.653876648 podStartE2EDuration="8.434118242s" podCreationTimestamp="2026-02-16 15:06:43 +0000 UTC" firstStartedPulling="2026-02-16 15:06:44.051073884 +0000 UTC m=+798.236050970" lastFinishedPulling="2026-02-16 15:06:50.831315478 +0000 UTC m=+805.016292564" observedRunningTime="2026-02-16 15:06:51.427666069 +0000 UTC m=+805.612643185" watchObservedRunningTime="2026-02-16 15:06:51.434118242 +0000 UTC m=+805.619095348" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.215361 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp"] Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.217621 4705 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.219446 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.226499 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp"] Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.236042 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.236134 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.236161 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 
15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338260 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338298 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338840 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338848 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.362961 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.547487 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:24 crc kubenswrapper[4705]: I0216 15:07:24.069137 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp"] Feb 16 15:07:24 crc kubenswrapper[4705]: I0216 15:07:24.737273 4705 generic.go:334] "Generic (PLEG): container finished" podID="50f390f7-dc79-47dd-80e2-436b17df094c" containerID="1563375574eb2b8b91769c0d8f258af832ff8c1a14bd66b6ed209d680a889ede" exitCode=0 Feb 16 15:07:24 crc kubenswrapper[4705]: I0216 15:07:24.737524 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerDied","Data":"1563375574eb2b8b91769c0d8f258af832ff8c1a14bd66b6ed209d680a889ede"} Feb 16 15:07:24 crc kubenswrapper[4705]: I0216 15:07:24.737833 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerStarted","Data":"ce474200642ecc06a922478d2aae2eb9d6bc6e32f0ba75f63fbb103dabe77bb1"} Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.562149 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.563957 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.578566 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.578634 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.578741 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.585583 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.681126 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.681296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.681351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.681960 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.682008 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 
crc kubenswrapper[4705]: I0216 15:07:25.711539 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.880152 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.300748 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.754510 4705 generic.go:334] "Generic (PLEG): container finished" podID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerID="6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09" exitCode=0 Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.754852 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerDied","Data":"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09"} Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.755010 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerStarted","Data":"e1f2ace940038734299af510330a0ecb19a41c91fefa525c71d6e5edc9c59bea"} Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.757650 4705 generic.go:334] "Generic (PLEG): container finished" podID="50f390f7-dc79-47dd-80e2-436b17df094c" containerID="b8b771a80b5e3cd43e27f54f4c0b684dd43dbf6f9cae0337e32463d6b69962cc" exitCode=0 Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.757712 4705 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerDied","Data":"b8b771a80b5e3cd43e27f54f4c0b684dd43dbf6f9cae0337e32463d6b69962cc"} Feb 16 15:07:27 crc kubenswrapper[4705]: I0216 15:07:27.770626 4705 generic.go:334] "Generic (PLEG): container finished" podID="50f390f7-dc79-47dd-80e2-436b17df094c" containerID="e93f9485ae8ff2f76d28f1342b41935818058bb14972c7d8a19feeb546abf353" exitCode=0 Feb 16 15:07:27 crc kubenswrapper[4705]: I0216 15:07:27.770703 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerDied","Data":"e93f9485ae8ff2f76d28f1342b41935818058bb14972c7d8a19feeb546abf353"} Feb 16 15:07:28 crc kubenswrapper[4705]: I0216 15:07:28.780396 4705 generic.go:334] "Generic (PLEG): container finished" podID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerID="b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f" exitCode=0 Feb 16 15:07:28 crc kubenswrapper[4705]: I0216 15:07:28.780482 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerDied","Data":"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f"} Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.184185 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.195146 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") pod \"50f390f7-dc79-47dd-80e2-436b17df094c\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.195405 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") pod \"50f390f7-dc79-47dd-80e2-436b17df094c\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.195585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") pod \"50f390f7-dc79-47dd-80e2-436b17df094c\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.195803 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle" (OuterVolumeSpecName: "bundle") pod "50f390f7-dc79-47dd-80e2-436b17df094c" (UID: "50f390f7-dc79-47dd-80e2-436b17df094c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.196040 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.212613 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf" (OuterVolumeSpecName: "kube-api-access-hkkdf") pod "50f390f7-dc79-47dd-80e2-436b17df094c" (UID: "50f390f7-dc79-47dd-80e2-436b17df094c"). InnerVolumeSpecName "kube-api-access-hkkdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.223263 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util" (OuterVolumeSpecName: "util") pod "50f390f7-dc79-47dd-80e2-436b17df094c" (UID: "50f390f7-dc79-47dd-80e2-436b17df094c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.297823 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.297859 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.794006 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerDied","Data":"ce474200642ecc06a922478d2aae2eb9d6bc6e32f0ba75f63fbb103dabe77bb1"} Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.794567 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce474200642ecc06a922478d2aae2eb9d6bc6e32f0ba75f63fbb103dabe77bb1" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.794055 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.798161 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerStarted","Data":"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8"} Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.844626 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t2m7d" podStartSLOduration=2.367610545 podStartE2EDuration="4.84459558s" podCreationTimestamp="2026-02-16 15:07:25 +0000 UTC" firstStartedPulling="2026-02-16 15:07:26.756612397 +0000 UTC m=+840.941589473" lastFinishedPulling="2026-02-16 15:07:29.233597442 +0000 UTC m=+843.418574508" observedRunningTime="2026-02-16 15:07:29.833436946 +0000 UTC m=+844.018414042" watchObservedRunningTime="2026-02-16 15:07:29.84459558 +0000 UTC m=+844.029572696" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.029040 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"] Feb 16 15:07:33 crc kubenswrapper[4705]: E0216 15:07:33.029945 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="util" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.029973 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="util" Feb 16 15:07:33 crc kubenswrapper[4705]: E0216 15:07:33.030015 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="extract" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.030029 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="extract" Feb 
16 15:07:33 crc kubenswrapper[4705]: E0216 15:07:33.030072 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="pull"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.030088 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="pull"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.030366 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="extract"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.031513 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.040816 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.041176 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.041235 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4l42x"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.044758 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"]
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.059315 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh4r6\" (UniqueName: \"kubernetes.io/projected/b2d83f82-a3e4-4937-8484-5f8174b5d986-kube-api-access-sh4r6\") pod \"nmstate-operator-694c9596b7-h6nzt\" (UID: \"b2d83f82-a3e4-4937-8484-5f8174b5d986\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.162070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh4r6\" (UniqueName: \"kubernetes.io/projected/b2d83f82-a3e4-4937-8484-5f8174b5d986-kube-api-access-sh4r6\") pod \"nmstate-operator-694c9596b7-h6nzt\" (UID: \"b2d83f82-a3e4-4937-8484-5f8174b5d986\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.190404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh4r6\" (UniqueName: \"kubernetes.io/projected/b2d83f82-a3e4-4937-8484-5f8174b5d986-kube-api-access-sh4r6\") pod \"nmstate-operator-694c9596b7-h6nzt\" (UID: \"b2d83f82-a3e4-4937-8484-5f8174b5d986\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.396729 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"
Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.932646 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"]
Feb 16 15:07:33 crc kubenswrapper[4705]: W0216 15:07:33.940574 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2d83f82_a3e4_4937_8484_5f8174b5d986.slice/crio-bdb9a64c520c82dab82cfd13ec3621bed27a35681095701409af2112d6765b40 WatchSource:0}: Error finding container bdb9a64c520c82dab82cfd13ec3621bed27a35681095701409af2112d6765b40: Status 404 returned error can't find the container with id bdb9a64c520c82dab82cfd13ec3621bed27a35681095701409af2112d6765b40
Feb 16 15:07:34 crc kubenswrapper[4705]: I0216 15:07:34.841904 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" event={"ID":"b2d83f82-a3e4-4937-8484-5f8174b5d986","Type":"ContainerStarted","Data":"bdb9a64c520c82dab82cfd13ec3621bed27a35681095701409af2112d6765b40"}
Feb 16 15:07:35 crc kubenswrapper[4705]: I0216 15:07:35.880717 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t2m7d"
Feb 16 15:07:35 crc kubenswrapper[4705]: I0216 15:07:35.880804 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t2m7d"
Feb 16 15:07:36 crc kubenswrapper[4705]: I0216 15:07:36.860361 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" event={"ID":"b2d83f82-a3e4-4937-8484-5f8174b5d986","Type":"ContainerStarted","Data":"e751a2f010613d7e9387c73d0de7f4ffb7383aa7b995a971d3716eaf7056bbc0"}
Feb 16 15:07:36 crc kubenswrapper[4705]: I0216 15:07:36.885244 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" podStartSLOduration=1.623486281 podStartE2EDuration="3.885215488s" podCreationTimestamp="2026-02-16 15:07:33 +0000 UTC" firstStartedPulling="2026-02-16 15:07:33.943596832 +0000 UTC m=+848.128573908" lastFinishedPulling="2026-02-16 15:07:36.205326039 +0000 UTC m=+850.390303115" observedRunningTime="2026-02-16 15:07:36.880982329 +0000 UTC m=+851.065959445" watchObservedRunningTime="2026-02-16 15:07:36.885215488 +0000 UTC m=+851.070192574"
Feb 16 15:07:36 crc kubenswrapper[4705]: I0216 15:07:36.951101 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t2m7d" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server" probeResult="failure" output=<
Feb 16 15:07:36 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 15:07:36 crc kubenswrapper[4705]: >
Feb 16 15:07:37 crc kubenswrapper[4705]: I0216 15:07:37.996247 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"]
Feb 16 15:07:37 crc kubenswrapper[4705]: I0216 15:07:37.998720 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.004125 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9w6g2"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.021584 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"]
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.022717 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.024036 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.030885 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"]
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.044036 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"]
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.059557 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wr89v"]
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.060896 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.082901 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-nmstate-lock\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.082959 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncztk\" (UniqueName: \"kubernetes.io/projected/9ffb9d03-b8ea-44ff-9397-58b55c367d89-kube-api-access-ncztk\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083257 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-ovs-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083490 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc2dl\" (UniqueName: \"kubernetes.io/projected/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-kube-api-access-vc2dl\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083727 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-dbus-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083832 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g24dz\" (UniqueName: \"kubernetes.io/projected/ed67458f-1875-405e-85a5-2a4f7d54089b-kube-api-access-g24dz\") pod \"nmstate-metrics-58c85c668d-tnbq4\" (UID: \"ed67458f-1875-405e-85a5-2a4f7d54089b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186123 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncztk\" (UniqueName: \"kubernetes.io/projected/9ffb9d03-b8ea-44ff-9397-58b55c367d89-kube-api-access-ncztk\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186198 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-ovs-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186238 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc2dl\" (UniqueName: \"kubernetes.io/projected/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-kube-api-access-vc2dl\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186262 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186418 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-ovs-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186649 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-dbus-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: E0216 15:07:38.186908 4705 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186945 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g24dz\" (UniqueName: \"kubernetes.io/projected/ed67458f-1875-405e-85a5-2a4f7d54089b-kube-api-access-g24dz\") pod \"nmstate-metrics-58c85c668d-tnbq4\" (UID: \"ed67458f-1875-405e-85a5-2a4f7d54089b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"
Feb 16 15:07:38 crc kubenswrapper[4705]: E0216 15:07:38.187130 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair podName:7a87077c-c5fa-4c92-9c08-44dcf11d38c7 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:38.687050634 +0000 UTC m=+852.872027710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair") pod "nmstate-webhook-866bcb46dc-9kf74" (UID: "7a87077c-c5fa-4c92-9c08-44dcf11d38c7") : secret "openshift-nmstate-webhook" not found
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186881 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-dbus-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.187333 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-nmstate-lock\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.187333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-nmstate-lock\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.207262 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncztk\" (UniqueName: \"kubernetes.io/projected/9ffb9d03-b8ea-44ff-9397-58b55c367d89-kube-api-access-ncztk\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.208162 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g24dz\" (UniqueName: \"kubernetes.io/projected/ed67458f-1875-405e-85a5-2a4f7d54089b-kube-api-access-g24dz\") pod \"nmstate-metrics-58c85c668d-tnbq4\" (UID: \"ed67458f-1875-405e-85a5-2a4f7d54089b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.233171 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"]
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.234550 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.239993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc2dl\" (UniqueName: \"kubernetes.io/projected/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-kube-api-access-vc2dl\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.245595 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-n6nx6"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.245911 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.246032 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.261065 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"]
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.289631 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/303c8298-3e10-49e8-96b1-ed1dafcd23e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.289986 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/303c8298-3e10-49e8-96b1-ed1dafcd23e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.290094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpkn\" (UniqueName: \"kubernetes.io/projected/303c8298-3e10-49e8-96b1-ed1dafcd23e3-kube-api-access-9kpkn\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.313825 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.391845 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/303c8298-3e10-49e8-96b1-ed1dafcd23e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.391927 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kpkn\" (UniqueName: \"kubernetes.io/projected/303c8298-3e10-49e8-96b1-ed1dafcd23e3-kube-api-access-9kpkn\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.391989 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/303c8298-3e10-49e8-96b1-ed1dafcd23e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.393420 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/303c8298-3e10-49e8-96b1-ed1dafcd23e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.395793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/303c8298-3e10-49e8-96b1-ed1dafcd23e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.400592 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.423401 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kpkn\" (UniqueName: \"kubernetes.io/projected/303c8298-3e10-49e8-96b1-ed1dafcd23e3-kube-api-access-9kpkn\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.518473 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cb874789d-44cjq"]
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.519691 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.548707 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"]
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.581995 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610031 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610497 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610516 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610642 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610671 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610691 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.700280 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"]
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714382 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714442 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714486 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714514 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714576 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714638 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714690 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.715663 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.716320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.716968 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.717490 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.722024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.725101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.733232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.737764 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.871492 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cb874789d-44cjq"
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.886705 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" event={"ID":"ed67458f-1875-405e-85a5-2a4f7d54089b","Type":"ContainerStarted","Data":"6f7a728c5618f61e0146cac924dd9fe784b169741f507663012bea8f022dd605"}
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.887432 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wr89v" event={"ID":"9ffb9d03-b8ea-44ff-9397-58b55c367d89","Type":"ContainerStarted","Data":"599d529d283ad9de645b516a35c0da9fc33387a655b9f75358257141a4589cc7"}
Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.987948 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.045752 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"]
Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.151238 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"]
Feb 16 15:07:39 crc kubenswrapper[4705]: W0216 15:07:39.156745 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ab25c9f_91f2_46f2_8abf_5004d8c114ad.slice/crio-2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c WatchSource:0}: Error finding container 2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c: Status 404 returned error can't find the container with id 2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c
Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.455046 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"]
Feb 16 15:07:39 crc kubenswrapper[4705]: W0216 15:07:39.468573 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a87077c_c5fa_4c92_9c08_44dcf11d38c7.slice/crio-697bb26475ee278ee9b2f910c6f4a90466ced569567160a2d62ee2ae6af7c860 WatchSource:0}: Error finding container 697bb26475ee278ee9b2f910c6f4a90466ced569567160a2d62ee2ae6af7c860: Status 404 returned error can't find the container with id 697bb26475ee278ee9b2f910c6f4a90466ced569567160a2d62ee2ae6af7c860
Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.899529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" event={"ID":"303c8298-3e10-49e8-96b1-ed1dafcd23e3","Type":"ContainerStarted","Data":"68ab50ae95fb3d694a0e114b2affc564cc66fb9050b1a9369d0aed0c4ae98248"}
Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.901575 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb874789d-44cjq" event={"ID":"5ab25c9f-91f2-46f2-8abf-5004d8c114ad","Type":"ContainerStarted","Data":"b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095"}
Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.901626 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb874789d-44cjq" event={"ID":"5ab25c9f-91f2-46f2-8abf-5004d8c114ad","Type":"ContainerStarted","Data":"2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c"}
Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.902877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" event={"ID":"7a87077c-c5fa-4c92-9c08-44dcf11d38c7","Type":"ContainerStarted","Data":"697bb26475ee278ee9b2f910c6f4a90466ced569567160a2d62ee2ae6af7c860"}
Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.935347 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cb874789d-44cjq" podStartSLOduration=1.935322312 podStartE2EDuration="1.935322312s" podCreationTimestamp="2026-02-16 15:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:07:39.920845664 +0000 UTC m=+854.105822740" watchObservedRunningTime="2026-02-16 15:07:39.935322312 +0000 UTC m=+854.120299378"
Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.930217 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" event={"ID":"303c8298-3e10-49e8-96b1-ed1dafcd23e3","Type":"ContainerStarted","Data":"4335750875ffcfddd6b580d5c1b6a01cf4c9c2647d4dca7e11785b83b74789dd"}
Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.933799 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" event={"ID":"ed67458f-1875-405e-85a5-2a4f7d54089b","Type":"ContainerStarted","Data":"87c0a4a38a3527738c8fc86bfbb9bd1497ea4e303ff49ae76c89ec3e2ed5179c"}
Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.934805 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wr89v" event={"ID":"9ffb9d03-b8ea-44ff-9397-58b55c367d89","Type":"ContainerStarted","Data":"bd4e22d200cae623261a9cf00dc7ed365e2e8924f3e3dc3230d4d52b9e3991f7"}
Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.935457 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wr89v"
Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.936775 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" event={"ID":"7a87077c-c5fa-4c92-9c08-44dcf11d38c7","Type":"ContainerStarted","Data":"e2ce8baf20d28c1cd4837600472d5359888ee260750d0dc7cf0c939f9ed62077"}
Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.937197 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"
Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.948732 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" podStartSLOduration=1.485368711 podStartE2EDuration="4.94870421s" podCreationTimestamp="2026-02-16 15:07:38 +0000 UTC" firstStartedPulling="2026-02-16 15:07:39.078173176 +0000 UTC m=+853.263150292" lastFinishedPulling="2026-02-16 15:07:42.541508715 +0000 UTC m=+856.726485791" observedRunningTime="2026-02-16 15:07:42.945285113 +0000 UTC m=+857.130262199" watchObservedRunningTime="2026-02-16 15:07:42.94870421 +0000 UTC m=+857.133681286"
Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.996447 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wr89v" podStartSLOduration=0.875902487 podStartE2EDuration="4.996423615s" podCreationTimestamp="2026-02-16 15:07:38 +0000 UTC" firstStartedPulling="2026-02-16 15:07:38.441185576 +0000 UTC m=+852.626162642" lastFinishedPulling="2026-02-16 15:07:42.561706654 +0000 UTC m=+856.746683770" observedRunningTime="2026-02-16 15:07:42.993086301 +0000 UTC m=+857.178063387" watchObservedRunningTime="2026-02-16 15:07:42.996423615 +0000 UTC m=+857.181400691"
Feb 16 15:07:43 crc kubenswrapper[4705]: I0216 15:07:43.020644 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" podStartSLOduration=2.937500152 podStartE2EDuration="6.020620416s" podCreationTimestamp="2026-02-16 15:07:37 +0000 UTC" firstStartedPulling="2026-02-16 15:07:39.471310185 +0000 UTC m=+853.656287261" lastFinishedPulling="2026-02-16 15:07:42.554430449 +0000 UTC m=+856.739407525" observedRunningTime="2026-02-16 15:07:43.018892928 +0000 UTC m=+857.203870034" watchObservedRunningTime="2026-02-16 15:07:43.020620416 +0000 UTC
m=+857.205597492" Feb 16 15:07:45 crc kubenswrapper[4705]: I0216 15:07:45.946317 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:45 crc kubenswrapper[4705]: I0216 15:07:45.975467 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" event={"ID":"ed67458f-1875-405e-85a5-2a4f7d54089b","Type":"ContainerStarted","Data":"4c55f81ad54ee5ca366362a72b88322d21a6e26c67aba10f5f500392f60a07a4"} Feb 16 15:07:46 crc kubenswrapper[4705]: I0216 15:07:46.012732 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" podStartSLOduration=2.482258383 podStartE2EDuration="9.012688424s" podCreationTimestamp="2026-02-16 15:07:37 +0000 UTC" firstStartedPulling="2026-02-16 15:07:38.704508616 +0000 UTC m=+852.889485692" lastFinishedPulling="2026-02-16 15:07:45.234938657 +0000 UTC m=+859.419915733" observedRunningTime="2026-02-16 15:07:46.006824779 +0000 UTC m=+860.191801865" watchObservedRunningTime="2026-02-16 15:07:46.012688424 +0000 UTC m=+860.197665550" Feb 16 15:07:46 crc kubenswrapper[4705]: I0216 15:07:46.035446 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:46 crc kubenswrapper[4705]: I0216 15:07:46.202694 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:46 crc kubenswrapper[4705]: I0216 15:07:46.984749 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t2m7d" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server" containerID="cri-o://ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" gracePeriod=2 Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.462773 4705 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.623082 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") pod \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.623359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") pod \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.623459 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") pod \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.624297 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities" (OuterVolumeSpecName: "utilities") pod "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" (UID: "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.630533 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn" (OuterVolumeSpecName: "kube-api-access-5jcwn") pod "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" (UID: "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98"). InnerVolumeSpecName "kube-api-access-5jcwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.726708 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.726756 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.785900 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" (UID: "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.828783 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998478 4705 generic.go:334] "Generic (PLEG): container finished" podID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerID="ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" exitCode=0 Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998545 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998547 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerDied","Data":"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8"} Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998658 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerDied","Data":"e1f2ace940038734299af510330a0ecb19a41c91fefa525c71d6e5edc9c59bea"} Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998702 4705 scope.go:117] "RemoveContainer" containerID="ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.038179 4705 scope.go:117] "RemoveContainer" containerID="b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.054458 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.068013 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.072691 4705 scope.go:117] "RemoveContainer" containerID="6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.112511 4705 scope.go:117] "RemoveContainer" containerID="ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" Feb 16 15:07:48 crc kubenswrapper[4705]: E0216 15:07:48.113302 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8\": container with ID starting with ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8 not found: ID does not exist" containerID="ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.113361 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8"} err="failed to get container status \"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8\": rpc error: code = NotFound desc = could not find container \"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8\": container with ID starting with ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8 not found: ID does not exist" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.113438 4705 scope.go:117] "RemoveContainer" containerID="b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f" Feb 16 15:07:48 crc kubenswrapper[4705]: E0216 15:07:48.114927 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f\": container with ID starting with b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f not found: ID does not exist" containerID="b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.114959 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f"} err="failed to get container status \"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f\": rpc error: code = NotFound desc = could not find container \"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f\": container with ID 
starting with b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f not found: ID does not exist" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.114977 4705 scope.go:117] "RemoveContainer" containerID="6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09" Feb 16 15:07:48 crc kubenswrapper[4705]: E0216 15:07:48.115413 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09\": container with ID starting with 6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09 not found: ID does not exist" containerID="6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.115476 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09"} err="failed to get container status \"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09\": rpc error: code = NotFound desc = could not find container \"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09\": container with ID starting with 6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09 not found: ID does not exist" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.435380 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" path="/var/lib/kubelet/pods/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98/volumes" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.436718 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.872418 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:48 crc 
kubenswrapper[4705]: I0216 15:07:48.873144 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.878098 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:49 crc kubenswrapper[4705]: I0216 15:07:49.017556 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:49 crc kubenswrapper[4705]: I0216 15:07:49.142848 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:07:58 crc kubenswrapper[4705]: I0216 15:07:58.995962 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:08:01 crc kubenswrapper[4705]: I0216 15:08:01.684191 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:08:01 crc kubenswrapper[4705]: I0216 15:08:01.685033 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.220912 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7bb776c56c-pzs4q" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" containerName="console" containerID="cri-o://ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" 
gracePeriod=15 Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.649933 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bb776c56c-pzs4q_80172f35-e30c-409c-b28e-eb65d41dd384/console/0.log" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.650271 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822355 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822422 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822448 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822507 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822572 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822604 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822701 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.823458 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.823466 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.823673 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config" (OuterVolumeSpecName: "console-config") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.823697 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca" (OuterVolumeSpecName: "service-ca") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.828569 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.830184 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.830245 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc" (OuterVolumeSpecName: "kube-api-access-4cxdc") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "kube-api-access-4cxdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924853 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924907 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924920 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924930 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924941 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924950 4705 reconciler_common.go:293] "Volume detached for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924960 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.277727 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bb776c56c-pzs4q_80172f35-e30c-409c-b28e-eb65d41dd384/console/0.log" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278118 4705 generic.go:334] "Generic (PLEG): container finished" podID="80172f35-e30c-409c-b28e-eb65d41dd384" containerID="ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" exitCode=2 Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278155 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-pzs4q" event={"ID":"80172f35-e30c-409c-b28e-eb65d41dd384","Type":"ContainerDied","Data":"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8"} Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278193 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-pzs4q" event={"ID":"80172f35-e30c-409c-b28e-eb65d41dd384","Type":"ContainerDied","Data":"62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b"} Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278218 4705 scope.go:117] "RemoveContainer" containerID="ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278244 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.304136 4705 scope.go:117] "RemoveContainer" containerID="ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" Feb 16 15:08:15 crc kubenswrapper[4705]: E0216 15:08:15.305204 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8\": container with ID starting with ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8 not found: ID does not exist" containerID="ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.305270 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8"} err="failed to get container status \"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8\": rpc error: code = NotFound desc = could not find container \"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8\": container with ID starting with ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8 not found: ID does not exist" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.324433 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.335270 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:08:16 crc kubenswrapper[4705]: I0216 15:08:16.435773 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" path="/var/lib/kubelet/pods/80172f35-e30c-409c-b28e-eb65d41dd384/volumes" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.564904 4705 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"] Feb 16 15:08:21 crc kubenswrapper[4705]: E0216 15:08:21.566317 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566343 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server" Feb 16 15:08:21 crc kubenswrapper[4705]: E0216 15:08:21.566362 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" containerName="console" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566447 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" containerName="console" Feb 16 15:08:21 crc kubenswrapper[4705]: E0216 15:08:21.566480 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="extract-content" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566494 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="extract-content" Feb 16 15:08:21 crc kubenswrapper[4705]: E0216 15:08:21.566540 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="extract-utilities" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566557 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="extract-utilities" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566845 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" containerName="console" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566910 4705 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.569151 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.572297 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.584836 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"]
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.662267 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.662953 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.663177 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.764454 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.764593 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.764643 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.765161 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.765168 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.791248 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.904809 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:22 crc kubenswrapper[4705]: I0216 15:08:22.400161 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"]
Feb 16 15:08:23 crc kubenswrapper[4705]: I0216 15:08:23.354183 4705 generic.go:334] "Generic (PLEG): container finished" podID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerID="732e7d5edc8cbe6eac47f114645abab9e9e240ce74b091e22a5b56434835e6f8" exitCode=0
Feb 16 15:08:23 crc kubenswrapper[4705]: I0216 15:08:23.354261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerDied","Data":"732e7d5edc8cbe6eac47f114645abab9e9e240ce74b091e22a5b56434835e6f8"}
Feb 16 15:08:23 crc kubenswrapper[4705]: I0216 15:08:23.354560 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerStarted","Data":"6c96d74fb5f00ded53661fb66a92dda76adc37399ba1aac37aa1b32f53da2329"}
Feb 16 15:08:23 crc kubenswrapper[4705]: I0216 15:08:23.361785 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 15:08:25 crc kubenswrapper[4705]: I0216 15:08:25.377153 4705 generic.go:334] "Generic (PLEG): container finished" podID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerID="edcb1e5aa6fa94e358515db7cb9dbc37da590533be6f6f1573aa9dc92a7e51ea" exitCode=0
Feb 16 15:08:25 crc kubenswrapper[4705]: I0216 15:08:25.377213 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerDied","Data":"edcb1e5aa6fa94e358515db7cb9dbc37da590533be6f6f1573aa9dc92a7e51ea"}
Feb 16 15:08:26 crc kubenswrapper[4705]: I0216 15:08:26.393480 4705 generic.go:334] "Generic (PLEG): container finished" podID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerID="6076b29932cbc2ee4abdbcae88d98e92e177a415ba13b846a51bb7f7be06afc1" exitCode=0
Feb 16 15:08:26 crc kubenswrapper[4705]: I0216 15:08:26.393564 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerDied","Data":"6076b29932cbc2ee4abdbcae88d98e92e177a415ba13b846a51bb7f7be06afc1"}
Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.754820 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.896525 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") pod \"e5b4da77-aea8-42f2-8a75-43943612e0e4\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") "
Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.897048 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") pod \"e5b4da77-aea8-42f2-8a75-43943612e0e4\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") "
Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.897083 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") pod \"e5b4da77-aea8-42f2-8a75-43943612e0e4\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") "
Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.898924 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle" (OuterVolumeSpecName: "bundle") pod "e5b4da77-aea8-42f2-8a75-43943612e0e4" (UID: "e5b4da77-aea8-42f2-8a75-43943612e0e4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.904597 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz" (OuterVolumeSpecName: "kube-api-access-g6cwz") pod "e5b4da77-aea8-42f2-8a75-43943612e0e4" (UID: "e5b4da77-aea8-42f2-8a75-43943612e0e4"). InnerVolumeSpecName "kube-api-access-g6cwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.910817 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util" (OuterVolumeSpecName: "util") pod "e5b4da77-aea8-42f2-8a75-43943612e0e4" (UID: "e5b4da77-aea8-42f2-8a75-43943612e0e4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.999640 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") on node \"crc\" DevicePath \"\""
Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.999988 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:08:28 crc kubenswrapper[4705]: I0216 15:08:28.000067 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") on node \"crc\" DevicePath \"\""
Feb 16 15:08:28 crc kubenswrapper[4705]: I0216 15:08:28.417017 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerDied","Data":"6c96d74fb5f00ded53661fb66a92dda76adc37399ba1aac37aa1b32f53da2329"}
Feb 16 15:08:28 crc kubenswrapper[4705]: I0216 15:08:28.417466 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c96d74fb5f00ded53661fb66a92dda76adc37399ba1aac37aa1b32f53da2329"
Feb 16 15:08:28 crc kubenswrapper[4705]: I0216 15:08:28.417120 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"
Feb 16 15:08:31 crc kubenswrapper[4705]: I0216 15:08:31.684539 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:08:31 crc kubenswrapper[4705]: I0216 15:08:31.685116 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.047816 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"]
Feb 16 15:08:36 crc kubenswrapper[4705]: E0216 15:08:36.048420 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="pull"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.048434 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="pull"
Feb 16 15:08:36 crc kubenswrapper[4705]: E0216 15:08:36.048452 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="util"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.048458 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="util"
Feb 16 15:08:36 crc kubenswrapper[4705]: E0216 15:08:36.048473 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="extract"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.048479 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="extract"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.048619 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="extract"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.049210 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.059252 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.060008 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fxwcf"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.060385 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.060621 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.066822 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"]
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.069402 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.157761 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-apiservice-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.158111 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9j9\" (UniqueName: \"kubernetes.io/projected/55ce7b61-e1e6-483d-a84f-7ea168ef9672-kube-api-access-4k9j9\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.158269 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-webhook-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.259962 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9j9\" (UniqueName: \"kubernetes.io/projected/55ce7b61-e1e6-483d-a84f-7ea168ef9672-kube-api-access-4k9j9\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.260041 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-webhook-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.260102 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-apiservice-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.277384 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-apiservice-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.277632 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-webhook-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.296302 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k9j9\" (UniqueName: \"kubernetes.io/projected/55ce7b61-e1e6-483d-a84f-7ea168ef9672-kube-api-access-4k9j9\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.368805 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.496307 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"]
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.504622 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.514254 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vp6v6"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.517295 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.517300 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.531102 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"]
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.669519 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-apiservice-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.669568 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-webhook-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.669592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw696\" (UniqueName: \"kubernetes.io/projected/624f7ca8-2011-4ed6-9ee2-24acddf29390-kube-api-access-dw696\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.771803 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-apiservice-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.772249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-webhook-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.772549 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw696\" (UniqueName: \"kubernetes.io/projected/624f7ca8-2011-4ed6-9ee2-24acddf29390-kube-api-access-dw696\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.788392 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-webhook-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.788457 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-apiservice-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.792528 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw696\" (UniqueName: \"kubernetes.io/projected/624f7ca8-2011-4ed6-9ee2-24acddf29390-kube-api-access-dw696\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.823629 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:37 crc kubenswrapper[4705]: I0216 15:08:37.105872 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"]
Feb 16 15:08:37 crc kubenswrapper[4705]: I0216 15:08:37.350553 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"]
Feb 16 15:08:37 crc kubenswrapper[4705]: W0216 15:08:37.353118 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod624f7ca8_2011_4ed6_9ee2_24acddf29390.slice/crio-f30c1626e032d6c4ced0a3126af30babe2e7c7ddd57545e71b4e90a4b07d0016 WatchSource:0}: Error finding container f30c1626e032d6c4ced0a3126af30babe2e7c7ddd57545e71b4e90a4b07d0016: Status 404 returned error can't find the container with id f30c1626e032d6c4ced0a3126af30babe2e7c7ddd57545e71b4e90a4b07d0016
Feb 16 15:08:37 crc kubenswrapper[4705]: I0216 15:08:37.512339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" event={"ID":"624f7ca8-2011-4ed6-9ee2-24acddf29390","Type":"ContainerStarted","Data":"f30c1626e032d6c4ced0a3126af30babe2e7c7ddd57545e71b4e90a4b07d0016"}
Feb 16 15:08:37 crc kubenswrapper[4705]: I0216 15:08:37.514071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" event={"ID":"55ce7b61-e1e6-483d-a84f-7ea168ef9672","Type":"ContainerStarted","Data":"16142500a433faeb385d51a91bf4850751e8a7de8beb1533dead43d43fe04733"}
Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.587492 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" event={"ID":"55ce7b61-e1e6-483d-a84f-7ea168ef9672","Type":"ContainerStarted","Data":"eb67781b4e2d597f45941a4c01c4dc97651e53cdcbc73517154a09a9fb67f78b"}
Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.588128 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"
Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.589550 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" event={"ID":"624f7ca8-2011-4ed6-9ee2-24acddf29390","Type":"ContainerStarted","Data":"ad89b43d796bb1caa6af788754c64e20a0bf58cb897d1d0dc1437582e86ad286"}
Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.615295 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" podStartSLOduration=2.527992671 podStartE2EDuration="7.61527352s" podCreationTimestamp="2026-02-16 15:08:36 +0000 UTC" firstStartedPulling="2026-02-16 15:08:37.162596041 +0000 UTC m=+911.347573117" lastFinishedPulling="2026-02-16 15:08:42.24987688 +0000 UTC m=+916.434853966" observedRunningTime="2026-02-16 15:08:43.610693589 +0000 UTC m=+917.795670685" watchObservedRunningTime="2026-02-16 15:08:43.61527352 +0000 UTC m=+917.800250596"
Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.647248 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" podStartSLOduration=2.729781863 podStartE2EDuration="7.64721903s" podCreationTimestamp="2026-02-16 15:08:36 +0000 UTC" firstStartedPulling="2026-02-16 15:08:37.356236782 +0000 UTC m=+911.541213858" lastFinishedPulling="2026-02-16 15:08:42.273673949 +0000 UTC m=+916.458651025" observedRunningTime="2026-02-16 15:08:43.640685804 +0000 UTC m=+917.825662890" watchObservedRunningTime="2026-02-16 15:08:43.64721903 +0000 UTC m=+917.832196116"
Feb 16 15:08:44 crc kubenswrapper[4705]: I0216 15:08:44.598262 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:56 crc kubenswrapper[4705]: I0216 15:08:56.828865 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.541846 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"]
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.543750 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.559002 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"]
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.704058 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.704534 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.704617 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.806818 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.806914 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.806981 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.807553 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.807645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.841301 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.874236 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:08:58 crc kubenswrapper[4705]: I0216 15:08:58.541507 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"]
Feb 16 15:08:58 crc kubenswrapper[4705]: I0216 15:08:58.709387 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerStarted","Data":"6ec3342b21b192f9b022f679237496c74782a9c984fb5bddd5c4b789c2bdab1f"}
Feb 16 15:08:59 crc kubenswrapper[4705]: I0216 15:08:59.721123 4705 generic.go:334] "Generic (PLEG): container finished" podID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerID="48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22" exitCode=0
Feb 16 15:08:59 crc kubenswrapper[4705]: I0216 15:08:59.721521 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerDied","Data":"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22"}
Feb 16 15:09:00 crc kubenswrapper[4705]: I0216 15:09:00.733694 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerStarted","Data":"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc"}
Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.684140 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.684244 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.684324 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4"
Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.685592 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.685735 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25" gracePeriod=600
Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.747593 4705 generic.go:334] "Generic (PLEG): container finished" podID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerID="f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc" exitCode=0
Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.747657 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerDied","Data":"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc"}
Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.760437 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25" exitCode=0
Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.760494 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25"}
Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.761449 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546"}
Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.761475 4705 scope.go:117] "RemoveContainer" containerID="66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948"
Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.768357 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerStarted","Data":"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd"}
Feb 16 15:09:07 crc kubenswrapper[4705]: I0216 15:09:07.875418 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:09:07 crc kubenswrapper[4705]: I0216 15:09:07.876054 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:09:07 crc kubenswrapper[4705]: I0216 15:09:07.919218 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:09:07 crc kubenswrapper[4705]: I0216 15:09:07.939277 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9pdg" podStartSLOduration=8.355613402 podStartE2EDuration="10.939257306s" podCreationTimestamp="2026-02-16 15:08:57 +0000 UTC" firstStartedPulling="2026-02-16 15:08:59.72352786 +0000 UTC m=+933.908504946" lastFinishedPulling="2026-02-16 15:09:02.307171774 +0000 UTC m=+936.492148850" observedRunningTime="2026-02-16 15:09:02.809096775 +0000 UTC m=+936.994073861" watchObservedRunningTime="2026-02-16 15:09:07.939257306 +0000 UTC m=+942.124234382"
Feb 16 15:09:08 crc kubenswrapper[4705]: I0216 15:09:08.878646 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9pdg"
Feb 16 15:09:10 crc kubenswrapper[4705]: I0216 15:09:10.324617 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"]
Feb 16 15:09:10 crc kubenswrapper[4705]: I0216 15:09:10.853259 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r9pdg" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="registry-server"
containerID="cri-o://26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" gracePeriod=2 Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.421059 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.529242 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") pod \"e7fb1d1e-a675-4965-9698-79db7cb89697\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.529345 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") pod \"e7fb1d1e-a675-4965-9698-79db7cb89697\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.529445 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") pod \"e7fb1d1e-a675-4965-9698-79db7cb89697\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.531539 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities" (OuterVolumeSpecName: "utilities") pod "e7fb1d1e-a675-4965-9698-79db7cb89697" (UID: "e7fb1d1e-a675-4965-9698-79db7cb89697"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.539115 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl" (OuterVolumeSpecName: "kube-api-access-6z5xl") pod "e7fb1d1e-a675-4965-9698-79db7cb89697" (UID: "e7fb1d1e-a675-4965-9698-79db7cb89697"). InnerVolumeSpecName "kube-api-access-6z5xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.581437 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7fb1d1e-a675-4965-9698-79db7cb89697" (UID: "e7fb1d1e-a675-4965-9698-79db7cb89697"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.632534 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.632609 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.632640 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867459 4705 generic.go:334] "Generic (PLEG): container finished" podID="e7fb1d1e-a675-4965-9698-79db7cb89697" 
containerID="26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" exitCode=0 Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867537 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerDied","Data":"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd"} Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867622 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerDied","Data":"6ec3342b21b192f9b022f679237496c74782a9c984fb5bddd5c4b789c2bdab1f"} Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867643 4705 scope.go:117] "RemoveContainer" containerID="26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867637 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.887315 4705 scope.go:117] "RemoveContainer" containerID="f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.908142 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"] Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.915687 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"] Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.920240 4705 scope.go:117] "RemoveContainer" containerID="48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.940883 4705 scope.go:117] "RemoveContainer" containerID="26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" Feb 16 15:09:11 crc kubenswrapper[4705]: E0216 15:09:11.944267 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd\": container with ID starting with 26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd not found: ID does not exist" containerID="26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.944558 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd"} err="failed to get container status \"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd\": rpc error: code = NotFound desc = could not find container \"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd\": container with ID starting with 26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd not 
found: ID does not exist" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.944729 4705 scope.go:117] "RemoveContainer" containerID="f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc" Feb 16 15:09:11 crc kubenswrapper[4705]: E0216 15:09:11.945309 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc\": container with ID starting with f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc not found: ID does not exist" containerID="f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.945391 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc"} err="failed to get container status \"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc\": rpc error: code = NotFound desc = could not find container \"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc\": container with ID starting with f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc not found: ID does not exist" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.945436 4705 scope.go:117] "RemoveContainer" containerID="48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22" Feb 16 15:09:11 crc kubenswrapper[4705]: E0216 15:09:11.945796 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22\": container with ID starting with 48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22 not found: ID does not exist" containerID="48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.945839 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22"} err="failed to get container status \"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22\": rpc error: code = NotFound desc = could not find container \"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22\": container with ID starting with 48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22 not found: ID does not exist" Feb 16 15:09:12 crc kubenswrapper[4705]: I0216 15:09:12.437112 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" path="/var/lib/kubelet/pods/e7fb1d1e-a675-4965-9698-79db7cb89697/volumes" Feb 16 15:09:16 crc kubenswrapper[4705]: I0216 15:09:16.371613 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.237409 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"] Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.238340 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="extract-content" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.238446 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="extract-content" Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.238575 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="extract-utilities" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.238656 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="extract-utilities" Feb 16 15:09:17 crc 
kubenswrapper[4705]: E0216 15:09:17.238735 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="registry-server" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.238805 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="registry-server" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.239102 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="registry-server" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.239996 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.245722 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.246447 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-84lgn" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.255877 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5znjj"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.259986 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.262080 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.262105 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.263490 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.338896 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nbgmf"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.342538 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.344941 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.345231 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xcfw5" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.345382 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.345560 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356057 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-conf\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 
15:09:17.356152 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-metrics\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356197 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/06291746-6582-464c-9dff-b4b98a359885-frr-startup\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s78qq\" (UniqueName: \"kubernetes.io/projected/06291746-6582-464c-9dff-b4b98a359885-kube-api-access-s78qq\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356357 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356485 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356585 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsgwb\" (UniqueName: \"kubernetes.io/projected/751baaae-9090-48b1-9bae-79b7527d6c02-kube-api-access-qsgwb\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-sockets\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356678 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-reloader\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.366420 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-5p2db"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.368092 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.370325 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.407719 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-5p2db"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.458721 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-cert\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459047 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsgwb\" (UniqueName: \"kubernetes.io/projected/751baaae-9090-48b1-9bae-79b7527d6c02-kube-api-access-qsgwb\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459152 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-sockets\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459218 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-reloader\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 
15:09:17.459290 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459383 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-conf\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-metrics\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459557 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459633 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/06291746-6582-464c-9dff-b4b98a359885-frr-startup\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459715 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s78qq\" (UniqueName: 
\"kubernetes.io/projected/06291746-6582-464c-9dff-b4b98a359885-kube-api-access-s78qq\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459792 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460106 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-metrics-certs\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460191 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ml4z\" (UniqueName: \"kubernetes.io/projected/2536f291-dea1-4673-acf7-9beaffa87817-kube-api-access-6ml4z\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460271 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460343 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/2536f291-dea1-4673-acf7-9beaffa87817-metallb-excludel2\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460446 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmlf\" (UniqueName: \"kubernetes.io/projected/493ad03c-5e3e-4726-9764-272f39f5aa37-kube-api-access-8wmlf\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459744 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-sockets\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.460848 4705 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.460939 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs podName:06291746-6582-464c-9dff-b4b98a359885 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:17.96092443 +0000 UTC m=+952.145901506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs") pod "frr-k8s-5znjj" (UID: "06291746-6582-464c-9dff-b4b98a359885") : secret "frr-k8s-certs-secret" not found
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460973 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/06291746-6582-464c-9dff-b4b98a359885-frr-startup\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.461128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-reloader\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.461281 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-conf\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj"
Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.461291 4705 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.461468 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert podName:751baaae-9090-48b1-9bae-79b7527d6c02 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:17.961458675 +0000 UTC m=+952.146435741 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert") pod "frr-k8s-webhook-server-78b44bf5bb-x4255" (UID: "751baaae-9090-48b1-9bae-79b7527d6c02") : secret "frr-k8s-webhook-server-cert" not found
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.461493 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-metrics\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.480225 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsgwb\" (UniqueName: \"kubernetes.io/projected/751baaae-9090-48b1-9bae-79b7527d6c02-kube-api-access-qsgwb\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.500416 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s78qq\" (UniqueName: \"kubernetes.io/projected/06291746-6582-464c-9dff-b4b98a359885-kube-api-access-s78qq\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562593 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2536f291-dea1-4673-acf7-9beaffa87817-metallb-excludel2\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562652 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wmlf\" (UniqueName: \"kubernetes.io/projected/493ad03c-5e3e-4726-9764-272f39f5aa37-kube-api-access-8wmlf\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562685 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-cert\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562721 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562757 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562814 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-metrics-certs\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562837 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ml4z\" (UniqueName: \"kubernetes.io/projected/2536f291-dea1-4673-acf7-9beaffa87817-kube-api-access-6ml4z\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.563232 4705 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.563355 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs podName:493ad03c-5e3e-4726-9764-272f39f5aa37 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:18.06333752 +0000 UTC m=+952.248314596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs") pod "controller-69bbfbf88f-5p2db" (UID: "493ad03c-5e3e-4726-9764-272f39f5aa37") : secret "controller-certs-secret" not found
Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.563257 4705 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.563351 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2536f291-dea1-4673-acf7-9beaffa87817-metallb-excludel2\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.563521 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist podName:2536f291-dea1-4673-acf7-9beaffa87817 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:18.063512585 +0000 UTC m=+952.248489661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist") pod "speaker-nbgmf" (UID: "2536f291-dea1-4673-acf7-9beaffa87817") : secret "metallb-memberlist" not found
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.567816 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-metrics-certs\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.586024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ml4z\" (UniqueName: \"kubernetes.io/projected/2536f291-dea1-4673-acf7-9beaffa87817-kube-api-access-6ml4z\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.588883 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-cert\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.601118 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wmlf\" (UniqueName: \"kubernetes.io/projected/493ad03c-5e3e-4726-9764-272f39f5aa37-kube-api-access-8wmlf\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.970202 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.970640 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.974514 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"
Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.974903 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.072084 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:18 crc kubenswrapper[4705]: E0216 15:09:18.072522 4705 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 15:09:18 crc kubenswrapper[4705]: E0216 15:09:18.072668 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist podName:2536f291-dea1-4673-acf7-9beaffa87817 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:19.072636432 +0000 UTC m=+953.257613518 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist") pod "speaker-nbgmf" (UID: "2536f291-dea1-4673-acf7-9beaffa87817") : secret "metallb-memberlist" not found
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.072732 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.076814 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.173554 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.184468 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5znjj"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.314972 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.652248 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"]
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.731436 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"]
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.736452 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.749769 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"]
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.790775 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-5p2db"]
Feb 16 15:09:18 crc kubenswrapper[4705]: W0216 15:09:18.795665 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod493ad03c_5e3e_4726_9764_272f39f5aa37.slice/crio-93b5156da07e3d44c3630f9968680183b0f1dc4e28b7e7b252547cef21d38ccc WatchSource:0}: Error finding container 93b5156da07e3d44c3630f9968680183b0f1dc4e28b7e7b252547cef21d38ccc: Status 404 returned error can't find the container with id 93b5156da07e3d44c3630f9968680183b0f1dc4e28b7e7b252547cef21d38ccc
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.889804 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.889884 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.890303 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.938277 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" event={"ID":"751baaae-9090-48b1-9bae-79b7527d6c02","Type":"ContainerStarted","Data":"793f3ad530efaf39bacc4bfe77342b4c42e982f3ef4fd5c9f4be8b8dc92d9390"}
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.939604 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-5p2db" event={"ID":"493ad03c-5e3e-4726-9764-272f39f5aa37","Type":"ContainerStarted","Data":"93b5156da07e3d44c3630f9968680183b0f1dc4e28b7e7b252547cef21d38ccc"}
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.940920 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"92b2162e0f48fcca515baebca68c2d5aa4544c6953532c9adfa3dbe2968d7588"}
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.992553 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.992686 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.992715 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.993192 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.993309 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.018424 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.067515 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.094601 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.098943 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.161632 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.571748 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"]
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.961767 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerDied","Data":"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b"}
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.961733 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerID="3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b" exitCode=0
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.962121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerStarted","Data":"f29a60f76f78314ae6f0243b56ba9336dc2a65e5f7b3d38a788960b018e46dc8"}
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.971722 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nbgmf" event={"ID":"2536f291-dea1-4673-acf7-9beaffa87817","Type":"ContainerStarted","Data":"6c12bc25f2a60a4aee880cb078919c06df4b8de3118a7ae2017ae5c67d221f72"}
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.971785 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nbgmf" event={"ID":"2536f291-dea1-4673-acf7-9beaffa87817","Type":"ContainerStarted","Data":"f4816f341107c3060505d53435ed97b6c6d9e99803ebb0268ac67463ddf586b2"}
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.971798 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nbgmf" event={"ID":"2536f291-dea1-4673-acf7-9beaffa87817","Type":"ContainerStarted","Data":"04cc7ce51873b70b99303f445c7e94b5f5fcb72d05693c43a6e904b3e5e88f2a"}
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.972529 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.977025 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-5p2db" event={"ID":"493ad03c-5e3e-4726-9764-272f39f5aa37","Type":"ContainerStarted","Data":"5eb2353ebfda386e81122e2d07c8766ba45e4afbc1f8523702af45415b969bf4"}
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.977083 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-5p2db" event={"ID":"493ad03c-5e3e-4726-9764-272f39f5aa37","Type":"ContainerStarted","Data":"4fb4d5fd5e2b4eb26eb30f528546ce5ad47659d9d506c68881b0a513f6c5e8d9"}
Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.977201 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:20 crc kubenswrapper[4705]: I0216 15:09:20.023241 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nbgmf" podStartSLOduration=3.023216986 podStartE2EDuration="3.023216986s" podCreationTimestamp="2026-02-16 15:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:20.020461168 +0000 UTC m=+954.205438244" watchObservedRunningTime="2026-02-16 15:09:20.023216986 +0000 UTC m=+954.208194052"
Feb 16 15:09:20 crc kubenswrapper[4705]: I0216 15:09:20.040619 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-5p2db" podStartSLOduration=3.040597162 podStartE2EDuration="3.040597162s" podCreationTimestamp="2026-02-16 15:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:20.03913058 +0000 UTC m=+954.224107656" watchObservedRunningTime="2026-02-16 15:09:20.040597162 +0000 UTC m=+954.225574238"
Feb 16 15:09:20 crc kubenswrapper[4705]: I0216 15:09:20.995254 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerID="076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73" exitCode=0
Feb 16 15:09:20 crc kubenswrapper[4705]: I0216 15:09:20.995688 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerDied","Data":"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73"}
Feb 16 15:09:22 crc kubenswrapper[4705]: I0216 15:09:22.011147 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerStarted","Data":"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd"}
Feb 16 15:09:26 crc kubenswrapper[4705]: I0216 15:09:26.450911 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zth6f" podStartSLOduration=7.003202296 podStartE2EDuration="8.450881382s" podCreationTimestamp="2026-02-16 15:09:18 +0000 UTC" firstStartedPulling="2026-02-16 15:09:19.964241935 +0000 UTC m=+954.149219011" lastFinishedPulling="2026-02-16 15:09:21.411921021 +0000 UTC m=+955.596898097" observedRunningTime="2026-02-16 15:09:22.05478839 +0000 UTC m=+956.239765466" watchObservedRunningTime="2026-02-16 15:09:26.450881382 +0000 UTC m=+960.635858468"
Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.067728 4705 generic.go:334] "Generic (PLEG): container finished" podID="06291746-6582-464c-9dff-b4b98a359885" containerID="b0eba72150775e1eda2c6ab0ac0dc2708448ef609b78997dde76ea7b87ee5681" exitCode=0
Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.068471 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerDied","Data":"b0eba72150775e1eda2c6ab0ac0dc2708448ef609b78997dde76ea7b87ee5681"}
Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.071548 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" event={"ID":"751baaae-9090-48b1-9bae-79b7527d6c02","Type":"ContainerStarted","Data":"de95d36f5c2de8b320d55953fec50186a6ab8e32f534acd984ffd3a5b9a0336e"}
Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.071782 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"
Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.129117 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" podStartSLOduration=2.345289917 podStartE2EDuration="11.129084891s" podCreationTimestamp="2026-02-16 15:09:17 +0000 UTC" firstStartedPulling="2026-02-16 15:09:18.67023586 +0000 UTC m=+952.855212936" lastFinishedPulling="2026-02-16 15:09:27.454030824 +0000 UTC m=+961.639007910" observedRunningTime="2026-02-16 15:09:28.116478272 +0000 UTC m=+962.301455388" watchObservedRunningTime="2026-02-16 15:09:28.129084891 +0000 UTC m=+962.314061977"
Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.320475 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-5p2db"
Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.070628 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.070942 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.084422 4705 generic.go:334] "Generic (PLEG): container finished" podID="06291746-6582-464c-9dff-b4b98a359885" containerID="36bc395f2f2fd2a8a7b9e39bbda23ccf4cc8a04b5fe04924c0feaaeaa6c5c84d" exitCode=0
Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.085832 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerDied","Data":"36bc395f2f2fd2a8a7b9e39bbda23ccf4cc8a04b5fe04924c0feaaeaa6c5c84d"}
Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.162624 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.168154 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nbgmf"
Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.260215 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.724568 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"]
Feb 16 15:09:30 crc kubenswrapper[4705]: I0216 15:09:30.112259 4705 generic.go:334] "Generic (PLEG): container finished" podID="06291746-6582-464c-9dff-b4b98a359885" containerID="2579975ed16e4b45dfa3bef1c777bf3fdb95652c7ead4055d0e82f27daedb0b7" exitCode=0
Feb 16 15:09:30 crc kubenswrapper[4705]: I0216 15:09:30.112464 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerDied","Data":"2579975ed16e4b45dfa3bef1c777bf3fdb95652c7ead4055d0e82f27daedb0b7"}
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.126439 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"1852a63991dc3e9a894aed9ddb064bb3d9a4d69e9db18e2a142ea37b17fd6331"}
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.127135 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"ac4bf677259900dfde7bdeedca51debd8d12b29304a9143c53e7dc60ab251821"}
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.127164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"42e0332232b7a36b3d9580c5fe4d06a42bbeff722e569eb804f410b113854522"}
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.127183 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"f1c16ad4524940644bda9374a4bfc51482be02e9b704219d1993c87d7703ffb0"}
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.127200 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"b59781b6c3b2224974d529886ed0603072bcbeed73e567e8014c0d7ca1d530d7"}
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.126538 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zth6f" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="registry-server" containerID="cri-o://9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd" gracePeriod=2
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.614609 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.777859 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") pod \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") "
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.778596 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") pod \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") "
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.778626 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") pod \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") "
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.778891 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities" (OuterVolumeSpecName: "utilities") pod "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" (UID: "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.779673 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.788298 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc" (OuterVolumeSpecName: "kube-api-access-rv2sc") pod "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" (UID: "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd"). InnerVolumeSpecName "kube-api-access-rv2sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.801602 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" (UID: "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.881200 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.881245 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") on node \"crc\" DevicePath \"\""
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.142174 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"0947eb28a0ca7d84bdb8938d709066e9928c4dfea34b71403f0c5772e4088ae6"}
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.142503 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5znjj"
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.146610 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerID="9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd" exitCode=0
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.146679 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerDied","Data":"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd"}
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.146724 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerDied","Data":"f29a60f76f78314ae6f0243b56ba9336dc2a65e5f7b3d38a788960b018e46dc8"}
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.146750 4705 scope.go:117] "RemoveContainer" containerID="9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd"
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.146972 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zth6f"
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.166900 4705 scope.go:117] "RemoveContainer" containerID="076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73"
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.245704 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5znjj" podStartSLOduration=6.149947455 podStartE2EDuration="15.245681804s" podCreationTimestamp="2026-02-16 15:09:17 +0000 UTC" firstStartedPulling="2026-02-16 15:09:18.335849735 +0000 UTC m=+952.520826811" lastFinishedPulling="2026-02-16 15:09:27.431584084 +0000 UTC m=+961.616561160" observedRunningTime="2026-02-16 15:09:32.214415303 +0000 UTC m=+966.399392379" watchObservedRunningTime="2026-02-16 15:09:32.245681804 +0000 UTC m=+966.430658880"
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.246585 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"]
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.251874 4705 scope.go:117] "RemoveContainer" containerID="3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b"
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.252638 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"]
Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.284631 4705 scope.go:117] "RemoveContainer" containerID="9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd"
Feb 16 15:09:32 crc kubenswrapper[4705]: E0216 15:09:32.288879 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd\": container with ID starting with 9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd not found: ID does not exist" containerID="9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.288938 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd"} err="failed to get container status \"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd\": rpc error: code = NotFound desc = could not find container \"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd\": container with ID starting with 9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd not found: ID does not exist" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.288987 4705 scope.go:117] "RemoveContainer" containerID="076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73" Feb 16 15:09:32 crc kubenswrapper[4705]: E0216 15:09:32.347986 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73\": container with ID starting with 076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73 not found: ID does not exist" containerID="076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.348067 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73"} err="failed to get container status \"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73\": rpc error: code = NotFound desc = could not find container 
\"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73\": container with ID starting with 076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73 not found: ID does not exist" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.348116 4705 scope.go:117] "RemoveContainer" containerID="3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b" Feb 16 15:09:32 crc kubenswrapper[4705]: E0216 15:09:32.350060 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b\": container with ID starting with 3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b not found: ID does not exist" containerID="3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.350124 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b"} err="failed to get container status \"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b\": rpc error: code = NotFound desc = could not find container \"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b\": container with ID starting with 3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b not found: ID does not exist" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.429643 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" path="/var/lib/kubelet/pods/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd/volumes" Feb 16 15:09:33 crc kubenswrapper[4705]: I0216 15:09:33.185496 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:33 crc kubenswrapper[4705]: I0216 15:09:33.241839 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.730660 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rtf6z"] Feb 16 15:09:34 crc kubenswrapper[4705]: E0216 15:09:34.731333 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="extract-utilities" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.731346 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="extract-utilities" Feb 16 15:09:34 crc kubenswrapper[4705]: E0216 15:09:34.731400 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="registry-server" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.731407 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="registry-server" Feb 16 15:09:34 crc kubenswrapper[4705]: E0216 15:09:34.731424 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="extract-content" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.731431 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="extract-content" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.731577 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="registry-server" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.732188 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.735060 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.735164 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.738438 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-krnkd" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.743917 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rtf6z"] Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.836126 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxqdd\" (UniqueName: \"kubernetes.io/projected/050e9b74-0e40-4a1a-8cb8-1ee038752bb6-kube-api-access-gxqdd\") pod \"openstack-operator-index-rtf6z\" (UID: \"050e9b74-0e40-4a1a-8cb8-1ee038752bb6\") " pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.939143 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxqdd\" (UniqueName: \"kubernetes.io/projected/050e9b74-0e40-4a1a-8cb8-1ee038752bb6-kube-api-access-gxqdd\") pod \"openstack-operator-index-rtf6z\" (UID: \"050e9b74-0e40-4a1a-8cb8-1ee038752bb6\") " pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.963426 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxqdd\" (UniqueName: \"kubernetes.io/projected/050e9b74-0e40-4a1a-8cb8-1ee038752bb6-kube-api-access-gxqdd\") pod \"openstack-operator-index-rtf6z\" (UID: 
\"050e9b74-0e40-4a1a-8cb8-1ee038752bb6\") " pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:35 crc kubenswrapper[4705]: I0216 15:09:35.053927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:35 crc kubenswrapper[4705]: I0216 15:09:35.595023 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rtf6z"] Feb 16 15:09:35 crc kubenswrapper[4705]: W0216 15:09:35.598500 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod050e9b74_0e40_4a1a_8cb8_1ee038752bb6.slice/crio-7f608f5debacb5e78bea380c8482710d812bd97dfa07859909e289363d1810ef WatchSource:0}: Error finding container 7f608f5debacb5e78bea380c8482710d812bd97dfa07859909e289363d1810ef: Status 404 returned error can't find the container with id 7f608f5debacb5e78bea380c8482710d812bd97dfa07859909e289363d1810ef Feb 16 15:09:36 crc kubenswrapper[4705]: I0216 15:09:36.201470 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rtf6z" event={"ID":"050e9b74-0e40-4a1a-8cb8-1ee038752bb6","Type":"ContainerStarted","Data":"7f608f5debacb5e78bea380c8482710d812bd97dfa07859909e289363d1810ef"} Feb 16 15:09:38 crc kubenswrapper[4705]: I0216 15:09:38.184464 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:39 crc kubenswrapper[4705]: I0216 15:09:39.245497 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rtf6z" event={"ID":"050e9b74-0e40-4a1a-8cb8-1ee038752bb6","Type":"ContainerStarted","Data":"6105fd8b0dda2549ad134eeceae8eb65d69a3a77be1c4f9dd5149617fd46d539"} Feb 16 15:09:39 crc kubenswrapper[4705]: I0216 15:09:39.272892 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-rtf6z" podStartSLOduration=2.547057474 podStartE2EDuration="5.272836522s" podCreationTimestamp="2026-02-16 15:09:34 +0000 UTC" firstStartedPulling="2026-02-16 15:09:35.605332534 +0000 UTC m=+969.790309620" lastFinishedPulling="2026-02-16 15:09:38.331111552 +0000 UTC m=+972.516088668" observedRunningTime="2026-02-16 15:09:39.270910438 +0000 UTC m=+973.455887574" watchObservedRunningTime="2026-02-16 15:09:39.272836522 +0000 UTC m=+973.457813648" Feb 16 15:09:45 crc kubenswrapper[4705]: I0216 15:09:45.054941 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:45 crc kubenswrapper[4705]: I0216 15:09:45.055861 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:45 crc kubenswrapper[4705]: I0216 15:09:45.097200 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:45 crc kubenswrapper[4705]: I0216 15:09:45.335779 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.383819 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw"] Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.388061 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.390417 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-96tph" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.401120 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw"] Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.543471 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.543624 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.543731 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 
15:09:46.645953 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.646113 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.646229 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.647006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.647023 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.667232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.710844 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:47 crc kubenswrapper[4705]: I0216 15:09:47.277932 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw"] Feb 16 15:09:47 crc kubenswrapper[4705]: I0216 15:09:47.326206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerStarted","Data":"b84edfa3737949e5d206452d63b90b8c94b5f5507690ed9b4a240228bc5efca9"} Feb 16 15:09:48 crc kubenswrapper[4705]: I0216 15:09:48.190704 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:48 crc kubenswrapper[4705]: I0216 15:09:48.336156 4705 generic.go:334] "Generic (PLEG): container finished" podID="1e942955-af48-4230-98dd-d8228e586600" containerID="c7d00f9d8b8279528eee34c4b4d573aa302d1c0e7f059cd1885fcea5b5543c4c" exitCode=0 Feb 16 15:09:48 
crc kubenswrapper[4705]: I0216 15:09:48.336317 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerDied","Data":"c7d00f9d8b8279528eee34c4b4d573aa302d1c0e7f059cd1885fcea5b5543c4c"} Feb 16 15:09:49 crc kubenswrapper[4705]: I0216 15:09:49.351235 4705 generic.go:334] "Generic (PLEG): container finished" podID="1e942955-af48-4230-98dd-d8228e586600" containerID="ecfc92dd12e9735b0cf209b641dd7125c0e01d34f4b2c3cee044137d7e87a423" exitCode=0 Feb 16 15:09:49 crc kubenswrapper[4705]: I0216 15:09:49.351320 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerDied","Data":"ecfc92dd12e9735b0cf209b641dd7125c0e01d34f4b2c3cee044137d7e87a423"} Feb 16 15:09:50 crc kubenswrapper[4705]: I0216 15:09:50.398988 4705 generic.go:334] "Generic (PLEG): container finished" podID="1e942955-af48-4230-98dd-d8228e586600" containerID="1ec075e0b56c346b8aa17d7294bacadcf0d6aec224cca6ac22a5fa5b8bf01109" exitCode=0 Feb 16 15:09:50 crc kubenswrapper[4705]: I0216 15:09:50.399432 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerDied","Data":"1ec075e0b56c346b8aa17d7294bacadcf0d6aec224cca6ac22a5fa5b8bf01109"} Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.710613 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.861838 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") pod \"1e942955-af48-4230-98dd-d8228e586600\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.861985 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") pod \"1e942955-af48-4230-98dd-d8228e586600\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.862113 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") pod \"1e942955-af48-4230-98dd-d8228e586600\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.862904 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle" (OuterVolumeSpecName: "bundle") pod "1e942955-af48-4230-98dd-d8228e586600" (UID: "1e942955-af48-4230-98dd-d8228e586600"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.868636 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9" (OuterVolumeSpecName: "kube-api-access-bz9l9") pod "1e942955-af48-4230-98dd-d8228e586600" (UID: "1e942955-af48-4230-98dd-d8228e586600"). InnerVolumeSpecName "kube-api-access-bz9l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.881292 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util" (OuterVolumeSpecName: "util") pod "1e942955-af48-4230-98dd-d8228e586600" (UID: "1e942955-af48-4230-98dd-d8228e586600"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.964526 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.964920 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.964990 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:52 crc kubenswrapper[4705]: I0216 15:09:52.424472 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:52 crc kubenswrapper[4705]: I0216 15:09:52.441312 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerDied","Data":"b84edfa3737949e5d206452d63b90b8c94b5f5507690ed9b4a240228bc5efca9"} Feb 16 15:09:52 crc kubenswrapper[4705]: I0216 15:09:52.441398 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b84edfa3737949e5d206452d63b90b8c94b5f5507690ed9b4a240228bc5efca9" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.149219 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2"] Feb 16 15:09:55 crc kubenswrapper[4705]: E0216 15:09:55.150329 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="extract" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.150344 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="extract" Feb 16 15:09:55 crc kubenswrapper[4705]: E0216 15:09:55.150452 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="pull" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.150459 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="pull" Feb 16 15:09:55 crc kubenswrapper[4705]: E0216 15:09:55.150476 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="util" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.150483 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e942955-af48-4230-98dd-d8228e586600" 
containerName="util" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.150643 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="extract" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.151393 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.154427 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zpvqs" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.172964 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2"] Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.243916 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkn2\" (UniqueName: \"kubernetes.io/projected/a8b2ba76-e9d9-404f-9859-22c40c63f1fb-kube-api-access-6wkn2\") pod \"openstack-operator-controller-init-787c798d66-r8xk2\" (UID: \"a8b2ba76-e9d9-404f-9859-22c40c63f1fb\") " pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.345383 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wkn2\" (UniqueName: \"kubernetes.io/projected/a8b2ba76-e9d9-404f-9859-22c40c63f1fb-kube-api-access-6wkn2\") pod \"openstack-operator-controller-init-787c798d66-r8xk2\" (UID: \"a8b2ba76-e9d9-404f-9859-22c40c63f1fb\") " pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.367405 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wkn2\" (UniqueName: 
\"kubernetes.io/projected/a8b2ba76-e9d9-404f-9859-22c40c63f1fb-kube-api-access-6wkn2\") pod \"openstack-operator-controller-init-787c798d66-r8xk2\" (UID: \"a8b2ba76-e9d9-404f-9859-22c40c63f1fb\") " pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.471967 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.738032 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.740070 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.811185 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.863593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.863694 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.863724 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.965976 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.966076 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.966112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.966819 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.967011 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.996703 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:56 crc kubenswrapper[4705]: I0216 15:09:56.044709 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2"] Feb 16 15:09:56 crc kubenswrapper[4705]: I0216 15:09:56.071385 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:56 crc kubenswrapper[4705]: I0216 15:09:56.475833 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" event={"ID":"a8b2ba76-e9d9-404f-9859-22c40c63f1fb","Type":"ContainerStarted","Data":"b35be9f54d11b2a61633a473e64debec951b744404005198956a4f5b4f213f02"} Feb 16 15:09:56 crc kubenswrapper[4705]: I0216 15:09:56.677208 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:09:56 crc kubenswrapper[4705]: W0216 15:09:56.697532 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf177069e_fdb0_44b5_a098_948bbb859bbc.slice/crio-d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722 WatchSource:0}: Error finding container 
d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722: Status 404 returned error can't find the container with id d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722 Feb 16 15:09:57 crc kubenswrapper[4705]: I0216 15:09:57.495354 4705 generic.go:334] "Generic (PLEG): container finished" podID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerID="12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc" exitCode=0 Feb 16 15:09:57 crc kubenswrapper[4705]: I0216 15:09:57.495750 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerDied","Data":"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc"} Feb 16 15:09:57 crc kubenswrapper[4705]: I0216 15:09:57.495785 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerStarted","Data":"d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722"} Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.535618 4705 generic.go:334] "Generic (PLEG): container finished" podID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerID="866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54" exitCode=0 Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.535722 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerDied","Data":"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54"} Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.538407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" 
event={"ID":"a8b2ba76-e9d9-404f-9859-22c40c63f1fb","Type":"ContainerStarted","Data":"825c95f4f1de8d5d902374e685350cbaaecb434eb6759ce16fc24439c2ed116f"} Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.538684 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.611432 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" podStartSLOduration=1.796564769 podStartE2EDuration="6.61140816s" podCreationTimestamp="2026-02-16 15:09:55 +0000 UTC" firstStartedPulling="2026-02-16 15:09:56.051600979 +0000 UTC m=+990.236578055" lastFinishedPulling="2026-02-16 15:10:00.86644436 +0000 UTC m=+995.051421446" observedRunningTime="2026-02-16 15:10:01.610564126 +0000 UTC m=+995.795541232" watchObservedRunningTime="2026-02-16 15:10:01.61140816 +0000 UTC m=+995.796385246" Feb 16 15:10:02 crc kubenswrapper[4705]: I0216 15:10:02.554604 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerStarted","Data":"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642"} Feb 16 15:10:02 crc kubenswrapper[4705]: I0216 15:10:02.579137 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftls8" podStartSLOduration=2.963906813 podStartE2EDuration="7.579115421s" podCreationTimestamp="2026-02-16 15:09:55 +0000 UTC" firstStartedPulling="2026-02-16 15:09:57.499643706 +0000 UTC m=+991.684620782" lastFinishedPulling="2026-02-16 15:10:02.114852314 +0000 UTC m=+996.299829390" observedRunningTime="2026-02-16 15:10:02.572362009 +0000 UTC m=+996.757339105" watchObservedRunningTime="2026-02-16 15:10:02.579115421 +0000 UTC m=+996.764092497" Feb 16 15:10:06 crc 
kubenswrapper[4705]: I0216 15:10:06.072789 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:06 crc kubenswrapper[4705]: I0216 15:10:06.073898 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:06 crc kubenswrapper[4705]: I0216 15:10:06.158931 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:15 crc kubenswrapper[4705]: I0216 15:10:15.474281 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:10:16 crc kubenswrapper[4705]: I0216 15:10:16.159524 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:16 crc kubenswrapper[4705]: I0216 15:10:16.261078 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:10:16 crc kubenswrapper[4705]: I0216 15:10:16.723118 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftls8" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="registry-server" containerID="cri-o://74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" gracePeriod=2 Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.170987 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.284743 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") pod \"f177069e-fdb0-44b5-a098-948bbb859bbc\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.284876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") pod \"f177069e-fdb0-44b5-a098-948bbb859bbc\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.284968 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") pod \"f177069e-fdb0-44b5-a098-948bbb859bbc\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.285868 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities" (OuterVolumeSpecName: "utilities") pod "f177069e-fdb0-44b5-a098-948bbb859bbc" (UID: "f177069e-fdb0-44b5-a098-948bbb859bbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.291655 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv" (OuterVolumeSpecName: "kube-api-access-jsmhv") pod "f177069e-fdb0-44b5-a098-948bbb859bbc" (UID: "f177069e-fdb0-44b5-a098-948bbb859bbc"). InnerVolumeSpecName "kube-api-access-jsmhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.333620 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f177069e-fdb0-44b5-a098-948bbb859bbc" (UID: "f177069e-fdb0-44b5-a098-948bbb859bbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.395061 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.395133 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.395153 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735277 4705 generic.go:334] "Generic (PLEG): container finished" podID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerID="74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" exitCode=0 Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735343 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerDied","Data":"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642"} Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735417 4705 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerDied","Data":"d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722"} Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735454 4705 scope.go:117] "RemoveContainer" containerID="74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735484 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.794175 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.796253 4705 scope.go:117] "RemoveContainer" containerID="866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.805782 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.829337 4705 scope.go:117] "RemoveContainer" containerID="12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.880024 4705 scope.go:117] "RemoveContainer" containerID="74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" Feb 16 15:10:17 crc kubenswrapper[4705]: E0216 15:10:17.880800 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642\": container with ID starting with 74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642 not found: ID does not exist" containerID="74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 
15:10:17.880844 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642"} err="failed to get container status \"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642\": rpc error: code = NotFound desc = could not find container \"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642\": container with ID starting with 74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642 not found: ID does not exist" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.880874 4705 scope.go:117] "RemoveContainer" containerID="866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54" Feb 16 15:10:17 crc kubenswrapper[4705]: E0216 15:10:17.881246 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54\": container with ID starting with 866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54 not found: ID does not exist" containerID="866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.881277 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54"} err="failed to get container status \"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54\": rpc error: code = NotFound desc = could not find container \"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54\": container with ID starting with 866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54 not found: ID does not exist" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.881295 4705 scope.go:117] "RemoveContainer" containerID="12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc" Feb 16 15:10:17 crc 
kubenswrapper[4705]: E0216 15:10:17.881582 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc\": container with ID starting with 12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc not found: ID does not exist" containerID="12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.881606 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc"} err="failed to get container status \"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc\": rpc error: code = NotFound desc = could not find container \"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc\": container with ID starting with 12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc not found: ID does not exist" Feb 16 15:10:18 crc kubenswrapper[4705]: I0216 15:10:18.432075 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" path="/var/lib/kubelet/pods/f177069e-fdb0-44b5-a098-948bbb859bbc/volumes" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.468291 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7"] Feb 16 15:10:35 crc kubenswrapper[4705]: E0216 15:10:35.469423 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="extract-utilities" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.469438 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="extract-utilities" Feb 16 15:10:35 crc kubenswrapper[4705]: E0216 15:10:35.469450 4705 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="extract-content" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.469457 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="extract-content" Feb 16 15:10:35 crc kubenswrapper[4705]: E0216 15:10:35.469483 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="registry-server" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.469490 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="registry-server" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.469708 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="registry-server" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.470481 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.477620 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-x552h" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.479059 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.480651 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.484035 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xmpx2" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.486009 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.492497 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.493710 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.500690 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tpf2v" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.535973 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.546430 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.567260 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqdn\" (UniqueName: \"kubernetes.io/projected/1b9942d1-9e1e-436b-8a58-e37d6b55a00b-kube-api-access-hqqdn\") pod \"barbican-operator-controller-manager-868647ff47-f52r7\" (UID: \"1b9942d1-9e1e-436b-8a58-e37d6b55a00b\") " 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.567386 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfccs\" (UniqueName: \"kubernetes.io/projected/84edc365-fa2c-40bc-ae0e-b71ae094ab27-kube-api-access-gfccs\") pod \"cinder-operator-controller-manager-5d946d989d-s9vdm\" (UID: \"84edc365-fa2c-40bc-ae0e-b71ae094ab27\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.567435 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m6t8\" (UniqueName: \"kubernetes.io/projected/f0b4e27c-91ff-4540-bfff-e6c30849c75f-kube-api-access-5m6t8\") pod \"designate-operator-controller-manager-6d8bf5c495-fsx2w\" (UID: \"f0b4e27c-91ff-4540-bfff-e6c30849c75f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.587177 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.588489 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.591388 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-26vj4" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.596688 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.598125 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.602454 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-sqfcj" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.608611 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.627321 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.628826 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.633565 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qx6x4" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.663462 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671134 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8qd\" (UniqueName: \"kubernetes.io/projected/5ee1a78f-cea6-443b-9b43-9ed2334c5c9e-kube-api-access-fl8qd\") pod \"heat-operator-controller-manager-69f49c598c-f4fgx\" (UID: \"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671701 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqbjx\" (UniqueName: 
\"kubernetes.io/projected/f1a4206b-818d-49e7-9177-9dc7373ded1c-kube-api-access-dqbjx\") pod \"horizon-operator-controller-manager-5b9b8895d5-q5n45\" (UID: \"f1a4206b-818d-49e7-9177-9dc7373ded1c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671776 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqqdn\" (UniqueName: \"kubernetes.io/projected/1b9942d1-9e1e-436b-8a58-e37d6b55a00b-kube-api-access-hqqdn\") pod \"barbican-operator-controller-manager-868647ff47-f52r7\" (UID: \"1b9942d1-9e1e-436b-8a58-e37d6b55a00b\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671856 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfccs\" (UniqueName: \"kubernetes.io/projected/84edc365-fa2c-40bc-ae0e-b71ae094ab27-kube-api-access-gfccs\") pod \"cinder-operator-controller-manager-5d946d989d-s9vdm\" (UID: \"84edc365-fa2c-40bc-ae0e-b71ae094ab27\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m6t8\" (UniqueName: \"kubernetes.io/projected/f0b4e27c-91ff-4540-bfff-e6c30849c75f-kube-api-access-5m6t8\") pod \"designate-operator-controller-manager-6d8bf5c495-fsx2w\" (UID: \"f0b4e27c-91ff-4540-bfff-e6c30849c75f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671978 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jw5s\" (UniqueName: \"kubernetes.io/projected/59e2a9a8-5a0d-4772-8d9c-b755fcd234be-kube-api-access-8jw5s\") pod 
\"glance-operator-controller-manager-77987464f4-xdlbv\" (UID: \"59e2a9a8-5a0d-4772-8d9c-b755fcd234be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.698980 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqqdn\" (UniqueName: \"kubernetes.io/projected/1b9942d1-9e1e-436b-8a58-e37d6b55a00b-kube-api-access-hqqdn\") pod \"barbican-operator-controller-manager-868647ff47-f52r7\" (UID: \"1b9942d1-9e1e-436b-8a58-e37d6b55a00b\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.699358 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.715160 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfccs\" (UniqueName: \"kubernetes.io/projected/84edc365-fa2c-40bc-ae0e-b71ae094ab27-kube-api-access-gfccs\") pod \"cinder-operator-controller-manager-5d946d989d-s9vdm\" (UID: \"84edc365-fa2c-40bc-ae0e-b71ae094ab27\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.716678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m6t8\" (UniqueName: \"kubernetes.io/projected/f0b4e27c-91ff-4540-bfff-e6c30849c75f-kube-api-access-5m6t8\") pod \"designate-operator-controller-manager-6d8bf5c495-fsx2w\" (UID: \"f0b4e27c-91ff-4540-bfff-e6c30849c75f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.721138 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 
15:10:35.722257 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.724857 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-k7ftx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.737843 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.749583 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.760473 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.778168 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.796147 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9lpc6" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.812568 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.812696 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.815735 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.816191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqbjx\" (UniqueName: \"kubernetes.io/projected/f1a4206b-818d-49e7-9177-9dc7373ded1c-kube-api-access-dqbjx\") pod \"horizon-operator-controller-manager-5b9b8895d5-q5n45\" (UID: \"f1a4206b-818d-49e7-9177-9dc7373ded1c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.816595 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jw5s\" (UniqueName: \"kubernetes.io/projected/59e2a9a8-5a0d-4772-8d9c-b755fcd234be-kube-api-access-8jw5s\") pod \"glance-operator-controller-manager-77987464f4-xdlbv\" (UID: \"59e2a9a8-5a0d-4772-8d9c-b755fcd234be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.816655 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqzh5\" (UniqueName: \"kubernetes.io/projected/a6d65371-bf15-42b9-857d-c4c7350aa402-kube-api-access-mqzh5\") pod \"ironic-operator-controller-manager-554564d7fc-ftdcn\" (UID: \"a6d65371-bf15-42b9-857d-c4c7350aa402\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.816829 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8qd\" (UniqueName: \"kubernetes.io/projected/5ee1a78f-cea6-443b-9b43-9ed2334c5c9e-kube-api-access-fl8qd\") pod \"heat-operator-controller-manager-69f49c598c-f4fgx\" (UID: \"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: 
I0216 15:10:35.868611 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.871167 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.878627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8qd\" (UniqueName: \"kubernetes.io/projected/5ee1a78f-cea6-443b-9b43-9ed2334c5c9e-kube-api-access-fl8qd\") pod \"heat-operator-controller-manager-69f49c598c-f4fgx\" (UID: \"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.901268 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xnsf9" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.903522 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqbjx\" (UniqueName: \"kubernetes.io/projected/f1a4206b-818d-49e7-9177-9dc7373ded1c-kube-api-access-dqbjx\") pod \"horizon-operator-controller-manager-5b9b8895d5-q5n45\" (UID: \"f1a4206b-818d-49e7-9177-9dc7373ded1c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.918240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqzh5\" (UniqueName: \"kubernetes.io/projected/a6d65371-bf15-42b9-857d-c4c7350aa402-kube-api-access-mqzh5\") pod \"ironic-operator-controller-manager-554564d7fc-ftdcn\" (UID: \"a6d65371-bf15-42b9-857d-c4c7350aa402\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 
15:10:35.918342 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-698r9\" (UniqueName: \"kubernetes.io/projected/9bd1689a-ae93-4ac0-ab21-c899756ef07a-kube-api-access-698r9\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.918427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.919424 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jw5s\" (UniqueName: \"kubernetes.io/projected/59e2a9a8-5a0d-4772-8d9c-b755fcd234be-kube-api-access-8jw5s\") pod \"glance-operator-controller-manager-77987464f4-xdlbv\" (UID: \"59e2a9a8-5a0d-4772-8d9c-b755fcd234be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.928820 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.943458 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.944979 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.952280 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqzh5\" (UniqueName: \"kubernetes.io/projected/a6d65371-bf15-42b9-857d-c4c7350aa402-kube-api-access-mqzh5\") pod \"ironic-operator-controller-manager-554564d7fc-ftdcn\" (UID: \"a6d65371-bf15-42b9-857d-c4c7350aa402\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.961777 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.968948 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.019520 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.021001 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.026920 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tw72f" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.027126 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84pqc\" (UniqueName: \"kubernetes.io/projected/34eadd57-e91b-4324-93c0-ede339012ab3-kube-api-access-84pqc\") pod \"keystone-operator-controller-manager-b4d948c87-8lztr\" (UID: \"34eadd57-e91b-4324-93c0-ede339012ab3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.027242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-698r9\" (UniqueName: \"kubernetes.io/projected/9bd1689a-ae93-4ac0-ab21-c899756ef07a-kube-api-access-698r9\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.027461 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.027615 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759"] Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.027683 4705 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.027755 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:36.527730721 +0000 UTC m=+1030.712707797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.028980 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.034608 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hlp4w" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.046126 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.056652 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.058110 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.061747 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-t6bmm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.062031 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-698r9\" (UniqueName: \"kubernetes.io/projected/9bd1689a-ae93-4ac0-ab21-c899756ef07a-kube-api-access-698r9\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.087997 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.129205 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.132418 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84pqc\" (UniqueName: \"kubernetes.io/projected/34eadd57-e91b-4324-93c0-ede339012ab3-kube-api-access-84pqc\") pod \"keystone-operator-controller-manager-b4d948c87-8lztr\" (UID: \"34eadd57-e91b-4324-93c0-ede339012ab3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.132535 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q5l7\" (UniqueName: \"kubernetes.io/projected/e73efbc6-26db-4760-b745-3c93c9b2329e-kube-api-access-8q5l7\") pod \"mariadb-operator-controller-manager-6994f66f48-kh759\" (UID: 
\"e73efbc6-26db-4760-b745-3c93c9b2329e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.132626 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvxl4\" (UniqueName: \"kubernetes.io/projected/f06e9156-0c7b-41f6-a1cf-83820a7e7732-kube-api-access-dvxl4\") pod \"manila-operator-controller-manager-54f6768c69-dnbpd\" (UID: \"f06e9156-0c7b-41f6-a1cf-83820a7e7732\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.142197 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.159474 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84pqc\" (UniqueName: \"kubernetes.io/projected/34eadd57-e91b-4324-93c0-ede339012ab3-kube-api-access-84pqc\") pod \"keystone-operator-controller-manager-b4d948c87-8lztr\" (UID: \"34eadd57-e91b-4324-93c0-ede339012ab3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.165658 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-b6587"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.170865 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.175058 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-8vjr8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.177094 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.178655 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.182675 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ct77r" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.234480 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q5l7\" (UniqueName: \"kubernetes.io/projected/e73efbc6-26db-4760-b745-3c93c9b2329e-kube-api-access-8q5l7\") pod \"mariadb-operator-controller-manager-6994f66f48-kh759\" (UID: \"e73efbc6-26db-4760-b745-3c93c9b2329e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.234529 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vwfb\" (UniqueName: \"kubernetes.io/projected/9f0ad3cb-ac80-4462-bd97-b09f9367dc54-kube-api-access-8vwfb\") pod \"neutron-operator-controller-manager-64ddbf8bb-2vvm8\" (UID: \"9f0ad3cb-ac80-4462-bd97-b09f9367dc54\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.234610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dvxl4\" (UniqueName: \"kubernetes.io/projected/f06e9156-0c7b-41f6-a1cf-83820a7e7732-kube-api-access-dvxl4\") pod \"manila-operator-controller-manager-54f6768c69-dnbpd\" (UID: \"f06e9156-0c7b-41f6-a1cf-83820a7e7732\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.243695 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-b6587"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.259151 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.259731 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q5l7\" (UniqueName: \"kubernetes.io/projected/e73efbc6-26db-4760-b745-3c93c9b2329e-kube-api-access-8q5l7\") pod \"mariadb-operator-controller-manager-6994f66f48-kh759\" (UID: \"e73efbc6-26db-4760-b745-3c93c9b2329e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.277486 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.278999 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.279080 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvxl4\" (UniqueName: \"kubernetes.io/projected/f06e9156-0c7b-41f6-a1cf-83820a7e7732-kube-api-access-dvxl4\") pod \"manila-operator-controller-manager-54f6768c69-dnbpd\" (UID: \"f06e9156-0c7b-41f6-a1cf-83820a7e7732\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.286296 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vw46g" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.295802 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.336123 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnt9q\" (UniqueName: \"kubernetes.io/projected/8279d837-6ad4-4e2b-a03a-eb0a24a30998-kube-api-access-rnt9q\") pod \"nova-operator-controller-manager-567668f5cf-b6587\" (UID: \"8279d837-6ad4-4e2b-a03a-eb0a24a30998\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.336265 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vwfb\" (UniqueName: \"kubernetes.io/projected/9f0ad3cb-ac80-4462-bd97-b09f9367dc54-kube-api-access-8vwfb\") pod \"neutron-operator-controller-manager-64ddbf8bb-2vvm8\" (UID: \"9f0ad3cb-ac80-4462-bd97-b09f9367dc54\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.336324 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmxx\" (UniqueName: \"kubernetes.io/projected/7373be90-eefb-4c2b-bdbd-a312daef2434-kube-api-access-bnmxx\") pod \"octavia-operator-controller-manager-69f8888797-zk57l\" (UID: \"7373be90-eefb-4c2b-bdbd-a312daef2434\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.350043 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.374723 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vwfb\" (UniqueName: \"kubernetes.io/projected/9f0ad3cb-ac80-4462-bd97-b09f9367dc54-kube-api-access-8vwfb\") pod \"neutron-operator-controller-manager-64ddbf8bb-2vvm8\" (UID: \"9f0ad3cb-ac80-4462-bd97-b09f9367dc54\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.387854 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.391901 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.393606 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.396785 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9wwsz" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.397019 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.408464 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.426153 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.437557 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmxx\" (UniqueName: \"kubernetes.io/projected/7373be90-eefb-4c2b-bdbd-a312daef2434-kube-api-access-bnmxx\") pod \"octavia-operator-controller-manager-69f8888797-zk57l\" (UID: \"7373be90-eefb-4c2b-bdbd-a312daef2434\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.437634 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnt9q\" (UniqueName: \"kubernetes.io/projected/8279d837-6ad4-4e2b-a03a-eb0a24a30998-kube-api-access-rnt9q\") pod \"nova-operator-controller-manager-567668f5cf-b6587\" (UID: \"8279d837-6ad4-4e2b-a03a-eb0a24a30998\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.437690 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh952\" (UniqueName: \"kubernetes.io/projected/d4a1c432-7691-472b-80af-caaa6afcacb2-kube-api-access-nh952\") pod \"ovn-operator-controller-manager-d44cf6b75-hw64s\" (UID: \"d4a1c432-7691-472b-80af-caaa6afcacb2\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.480493 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnmxx\" (UniqueName: \"kubernetes.io/projected/7373be90-eefb-4c2b-bdbd-a312daef2434-kube-api-access-bnmxx\") pod \"octavia-operator-controller-manager-69f8888797-zk57l\" (UID: \"7373be90-eefb-4c2b-bdbd-a312daef2434\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.510715 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnt9q\" (UniqueName: \"kubernetes.io/projected/8279d837-6ad4-4e2b-a03a-eb0a24a30998-kube-api-access-rnt9q\") pod \"nova-operator-controller-manager-567668f5cf-b6587\" (UID: \"8279d837-6ad4-4e2b-a03a-eb0a24a30998\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.539699 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.539775 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswht\" (UniqueName: 
\"kubernetes.io/projected/1872b592-a1cc-445a-b75f-f658612dc160-kube-api-access-gswht\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.539888 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.539999 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh952\" (UniqueName: \"kubernetes.io/projected/d4a1c432-7691-472b-80af-caaa6afcacb2-kube-api-access-nh952\") pod \"ovn-operator-controller-manager-d44cf6b75-hw64s\" (UID: \"d4a1c432-7691-472b-80af-caaa6afcacb2\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.546153 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.546236 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:37.546212704 +0000 UTC m=+1031.731189970 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.559519 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.560799 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.560827 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.560844 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.561789 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.562433 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.565824 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.562638 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.562603 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.562703 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.563330 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh952\" (UniqueName: \"kubernetes.io/projected/d4a1c432-7691-472b-80af-caaa6afcacb2-kube-api-access-nh952\") pod \"ovn-operator-controller-manager-d44cf6b75-hw64s\" (UID: \"d4a1c432-7691-472b-80af-caaa6afcacb2\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.573672 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-m9bpn" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.574028 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-kpsnn" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.582916 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-n2trf" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.596277 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.613140 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.642181 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.658207 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gswht\" (UniqueName: \"kubernetes.io/projected/1872b592-a1cc-445a-b75f-f658612dc160-kube-api-access-gswht\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.658453 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbk8\" (UniqueName: \"kubernetes.io/projected/ca67e7ec-20a9-4768-ae37-3aa90f721201-kube-api-access-8pbk8\") pod \"swift-operator-controller-manager-68f46476f-6c6fr\" (UID: \"ca67e7ec-20a9-4768-ae37-3aa90f721201\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.665284 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-bk9rm"] Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.642408 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 
15:10:36.665662 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:37.165599888 +0000 UTC m=+1031.350576964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.669977 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.673526 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rpg8h" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.674754 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.687521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2zvv\" (UniqueName: \"kubernetes.io/projected/794d8603-8fa6-4068-8a38-e0825d42ae3f-kube-api-access-j2zvv\") pod \"placement-operator-controller-manager-8497b45c89-vkmgq\" (UID: \"794d8603-8fa6-4068-8a38-e0825d42ae3f\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.687686 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br499\" (UniqueName: \"kubernetes.io/projected/8d4c4ad7-542f-4d25-a444-7b4752e43f89-kube-api-access-br499\") pod \"telemetry-operator-controller-manager-6ccb9b958b-qbt7j\" (UID: \"8d4c4ad7-542f-4d25-a444-7b4752e43f89\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.699540 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gswht\" (UniqueName: \"kubernetes.io/projected/1872b592-a1cc-445a-b75f-f658612dc160-kube-api-access-gswht\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.733979 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-bk9rm"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.794216 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800441 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbk8\" (UniqueName: \"kubernetes.io/projected/ca67e7ec-20a9-4768-ae37-3aa90f721201-kube-api-access-8pbk8\") pod \"swift-operator-controller-manager-68f46476f-6c6fr\" (UID: \"ca67e7ec-20a9-4768-ae37-3aa90f721201\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800547 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2zvv\" (UniqueName: \"kubernetes.io/projected/794d8603-8fa6-4068-8a38-e0825d42ae3f-kube-api-access-j2zvv\") pod \"placement-operator-controller-manager-8497b45c89-vkmgq\" (UID: \"794d8603-8fa6-4068-8a38-e0825d42ae3f\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br499\" (UniqueName: \"kubernetes.io/projected/8d4c4ad7-542f-4d25-a444-7b4752e43f89-kube-api-access-br499\") pod \"telemetry-operator-controller-manager-6ccb9b958b-qbt7j\" (UID: \"8d4c4ad7-542f-4d25-a444-7b4752e43f89\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800642 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p5hh\" (UniqueName: \"kubernetes.io/projected/c66cb2ee-a6d3-454b-a2ea-a160038b76f6-kube-api-access-9p5hh\") pod \"test-operator-controller-manager-7866795846-bk9rm\" (UID: \"c66cb2ee-a6d3-454b-a2ea-a160038b76f6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800926 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.808532 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-v8lz9" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.821325 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2zvv\" (UniqueName: \"kubernetes.io/projected/794d8603-8fa6-4068-8a38-e0825d42ae3f-kube-api-access-j2zvv\") pod \"placement-operator-controller-manager-8497b45c89-vkmgq\" (UID: \"794d8603-8fa6-4068-8a38-e0825d42ae3f\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.823817 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbk8\" (UniqueName: \"kubernetes.io/projected/ca67e7ec-20a9-4768-ae37-3aa90f721201-kube-api-access-8pbk8\") pod \"swift-operator-controller-manager-68f46476f-6c6fr\" (UID: \"ca67e7ec-20a9-4768-ae37-3aa90f721201\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.828268 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br499\" (UniqueName: \"kubernetes.io/projected/8d4c4ad7-542f-4d25-a444-7b4752e43f89-kube-api-access-br499\") pod \"telemetry-operator-controller-manager-6ccb9b958b-qbt7j\" (UID: \"8d4c4ad7-542f-4d25-a444-7b4752e43f89\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.830509 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.908428 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-9p5hh\" (UniqueName: \"kubernetes.io/projected/c66cb2ee-a6d3-454b-a2ea-a160038b76f6-kube-api-access-9p5hh\") pod \"test-operator-controller-manager-7866795846-bk9rm\" (UID: \"c66cb2ee-a6d3-454b-a2ea-a160038b76f6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.908908 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjkq7\" (UniqueName: \"kubernetes.io/projected/d583ac10-9ad2-4f95-9787-74f2cb28c943-kube-api-access-mjkq7\") pod \"watcher-operator-controller-manager-5db88f68c-77d2l\" (UID: \"d583ac10-9ad2-4f95-9787-74f2cb28c943\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.926780 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.938586 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.967475 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p5hh\" (UniqueName: \"kubernetes.io/projected/c66cb2ee-a6d3-454b-a2ea-a160038b76f6-kube-api-access-9p5hh\") pod \"test-operator-controller-manager-7866795846-bk9rm\" (UID: \"c66cb2ee-a6d3-454b-a2ea-a160038b76f6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.979176 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.981096 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.992078 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.000760 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.011187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjkq7\" (UniqueName: \"kubernetes.io/projected/d583ac10-9ad2-4f95-9787-74f2cb28c943-kube-api-access-mjkq7\") pod \"watcher-operator-controller-manager-5db88f68c-77d2l\" (UID: \"d583ac10-9ad2-4f95-9787-74f2cb28c943\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.014706 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.014965 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-b5p6j" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.015252 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.027923 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.054671 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjkq7\" (UniqueName: 
\"kubernetes.io/projected/d583ac10-9ad2-4f95-9787-74f2cb28c943-kube-api-access-mjkq7\") pod \"watcher-operator-controller-manager-5db88f68c-77d2l\" (UID: \"d583ac10-9ad2-4f95-9787-74f2cb28c943\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.075752 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.077255 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.084714 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.088050 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d2bn5" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.113536 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.113917 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " 
pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.120393 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrf9c\" (UniqueName: \"kubernetes.io/projected/07891331-9fdb-4922-aea1-6a3acf7f656f-kube-api-access-zrf9c\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.145460 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.192444 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.237831 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.237895 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.237944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zrf9c\" (UniqueName: \"kubernetes.io/projected/07891331-9fdb-4922-aea1-6a3acf7f656f-kube-api-access-zrf9c\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.238090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.238149 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9sh\" (UniqueName: \"kubernetes.io/projected/d67e5221-5cd4-4659-a41b-5d470f435c3e-kube-api-access-bq9sh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5s9ck\" (UID: \"d67e5221-5cd4-4659-a41b-5d470f435c3e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238355 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238465 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:38.23843417 +0000 UTC m=+1032.423411246 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238755 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238805 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:37.73878901 +0000 UTC m=+1031.923766076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238856 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238890 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:37.738881613 +0000 UTC m=+1031.923858689 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.257858 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.294353 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrf9c\" (UniqueName: \"kubernetes.io/projected/07891331-9fdb-4922-aea1-6a3acf7f656f-kube-api-access-zrf9c\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.351156 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq9sh\" (UniqueName: \"kubernetes.io/projected/d67e5221-5cd4-4659-a41b-5d470f435c3e-kube-api-access-bq9sh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5s9ck\" (UID: \"d67e5221-5cd4-4659-a41b-5d470f435c3e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.412936 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq9sh\" (UniqueName: \"kubernetes.io/projected/d67e5221-5cd4-4659-a41b-5d470f435c3e-kube-api-access-bq9sh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5s9ck\" (UID: \"d67e5221-5cd4-4659-a41b-5d470f435c3e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.451902 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.561968 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.563709 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.563763 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:39.563746785 +0000 UTC m=+1033.748723861 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.768635 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.768870 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.768927 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.768960 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:38.768939326 +0000 UTC m=+1032.953916392 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.769774 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.769865 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:38.769842572 +0000 UTC m=+1032.954819828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.890418 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.923424 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.935084 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.949803 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr"] Feb 16 15:10:37 crc kubenswrapper[4705]: W0216 
15:10:37.978789 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34eadd57_e91b_4324_93c0_ede339012ab3.slice/crio-fc1f98378e5f11da16ab5dbaa99154b8e15fef44808620bf55830e344f565529 WatchSource:0}: Error finding container fc1f98378e5f11da16ab5dbaa99154b8e15fef44808620bf55830e344f565529: Status 404 returned error can't find the container with id fc1f98378e5f11da16ab5dbaa99154b8e15fef44808620bf55830e344f565529 Feb 16 15:10:37 crc kubenswrapper[4705]: W0216 15:10:37.980274 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0b4e27c_91ff_4540_bfff_e6c30849c75f.slice/crio-6c83711f155a713c81139048dd75ef6cae14a37e9e23a00913ac912e9d8318ea WatchSource:0}: Error finding container 6c83711f155a713c81139048dd75ef6cae14a37e9e23a00913ac912e9d8318ea: Status 404 returned error can't find the container with id 6c83711f155a713c81139048dd75ef6cae14a37e9e23a00913ac912e9d8318ea Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.991496 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" event={"ID":"84edc365-fa2c-40bc-ae0e-b71ae094ab27","Type":"ContainerStarted","Data":"71c77b6249de7bf666267115eee47697de039f8777efddbd412fddb2d4f335e4"} Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.998262 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" event={"ID":"f1a4206b-818d-49e7-9177-9dc7373ded1c","Type":"ContainerStarted","Data":"882be1bf4b81d928fb77017cdcb45594b1ef9b78db0197ef17df96be6b44eaf7"} Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.004293 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.013674 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" event={"ID":"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e","Type":"ContainerStarted","Data":"56fb7f87cc952bfe9df4b5094af90d90feff05c4c3e0d26258650fd59ce5e9e1"} Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.015962 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" event={"ID":"1b9942d1-9e1e-436b-8a58-e37d6b55a00b","Type":"ContainerStarted","Data":"84cbeea1b8569314e3a39d19a6c3a81960c05b9ed365d5254499a7b0a3c593d6"} Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.178894 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.200424 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.262236 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.310827 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.311045 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.311110 4705 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:40.311093834 +0000 UTC m=+1034.496070910 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.543281 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.563146 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-b6587"] Feb 16 15:10:38 crc kubenswrapper[4705]: W0216 15:10:38.572333 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8279d837_6ad4_4e2b_a03a_eb0a24a30998.slice/crio-0ce1b6f4b06ddcef363b2f69e26bee286cff0854df33526f1a42c63c0d8a806c WatchSource:0}: Error finding container 0ce1b6f4b06ddcef363b2f69e26bee286cff0854df33526f1a42c63c0d8a806c: Status 404 returned error can't find the container with id 0ce1b6f4b06ddcef363b2f69e26bee286cff0854df33526f1a42c63c0d8a806c Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.604478 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.611248 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l"] Feb 16 15:10:38 crc kubenswrapper[4705]: W0216 15:10:38.662308 
4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7373be90_eefb_4c2b_bdbd_a312daef2434.slice/crio-5769ab6c98393e088e9b85a18cf50620cf4bfc26eca3b70476ee6a82c08c4ad2 WatchSource:0}: Error finding container 5769ab6c98393e088e9b85a18cf50620cf4bfc26eca3b70476ee6a82c08c4ad2: Status 404 returned error can't find the container with id 5769ab6c98393e088e9b85a18cf50620cf4bfc26eca3b70476ee6a82c08c4ad2
Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.822664 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.823143 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.823336 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.823401 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:40.823383781 +0000 UTC m=+1035.008360857 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found
Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.823776 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.823808 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:40.823800002 +0000 UTC m=+1035.008777078 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found
Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.921975 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr"]
Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.948154 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq"]
Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.981459 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-bk9rm"]
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.010725 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck"]
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.044770 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l"]
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.109670 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j"]
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.112315 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" event={"ID":"e73efbc6-26db-4760-b745-3c93c9b2329e","Type":"ContainerStarted","Data":"f6b915c7b7aaeaa24ad5d28f57edad862cae1a3a23b0775e952534bdb6f05ab5"}
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.118856 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" event={"ID":"a6d65371-bf15-42b9-857d-c4c7350aa402","Type":"ContainerStarted","Data":"07ab2daec4cd1119e94220cf4a6e5648aae2f86209abc64857868e36703902c5"}
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.166436 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" event={"ID":"9f0ad3cb-ac80-4462-bd97-b09f9367dc54","Type":"ContainerStarted","Data":"c27554cf31fd824856e9c4d0d610a41b7e54d540006b10b19075ac1a6099dcf4"}
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.169181 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" event={"ID":"f0b4e27c-91ff-4540-bfff-e6c30849c75f","Type":"ContainerStarted","Data":"6c83711f155a713c81139048dd75ef6cae14a37e9e23a00913ac912e9d8318ea"}
Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.171625 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bq9sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5s9ck_openstack-operators(d67e5221-5cd4-4659-a41b-5d470f435c3e): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.171916 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" event={"ID":"f06e9156-0c7b-41f6-a1cf-83820a7e7732","Type":"ContainerStarted","Data":"21edcf6feeea5a9d0e65e6f05a309d694fd65f17fff3c96f93509357337e456d"}
Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.173032 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" podUID="d67e5221-5cd4-4659-a41b-5d470f435c3e"
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.175304 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" event={"ID":"34eadd57-e91b-4324-93c0-ede339012ab3","Type":"ContainerStarted","Data":"fc1f98378e5f11da16ab5dbaa99154b8e15fef44808620bf55830e344f565529"}
Feb 16 15:10:39 crc kubenswrapper[4705]: W0216 15:10:39.201919 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d4c4ad7_542f_4d25_a444_7b4752e43f89.slice/crio-0ecea067f607b3899bc3a5a0881b814f78da418dbdb8e56a01e2232763373878 WatchSource:0}: Error finding container 0ecea067f607b3899bc3a5a0881b814f78da418dbdb8e56a01e2232763373878: Status 404 returned error can't find the container with id 0ecea067f607b3899bc3a5a0881b814f78da418dbdb8e56a01e2232763373878
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.209262 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" event={"ID":"59e2a9a8-5a0d-4772-8d9c-b755fcd234be","Type":"ContainerStarted","Data":"b73872e77f66a39eee1575c5ee3d8f38ac806a620df51b664b46eeeee35e64be"}
Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.210483 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-br499,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6ccb9b958b-qbt7j_openstack-operators(8d4c4ad7-542f-4d25-a444-7b4752e43f89): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.211608 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" podUID="8d4c4ad7-542f-4d25-a444-7b4752e43f89"
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.213393 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" event={"ID":"8279d837-6ad4-4e2b-a03a-eb0a24a30998","Type":"ContainerStarted","Data":"0ce1b6f4b06ddcef363b2f69e26bee286cff0854df33526f1a42c63c0d8a806c"}
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.215726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" event={"ID":"d4a1c432-7691-472b-80af-caaa6afcacb2","Type":"ContainerStarted","Data":"858d5984e85f61ee4ef173dcf1aad4a8e9d6ebe913b9361fd59cbae5944ddfeb"}
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.225955 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" event={"ID":"7373be90-eefb-4c2b-bdbd-a312daef2434","Type":"ContainerStarted","Data":"5769ab6c98393e088e9b85a18cf50620cf4bfc26eca3b70476ee6a82c08c4ad2"}
Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.661090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"
Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.661596 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.661661 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:43.661644241 +0000 UTC m=+1037.846621317 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found
Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.383462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" event={"ID":"c66cb2ee-a6d3-454b-a2ea-a160038b76f6","Type":"ContainerStarted","Data":"e7d724620ab28912120b1d0e926f4bc8de254b44a90930caeea1b9953e3e8b6c"}
Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.385826 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"
Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.386132 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.386212 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:44.38619104 +0000 UTC m=+1038.571168116 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.397568 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" event={"ID":"ca67e7ec-20a9-4768-ae37-3aa90f721201","Type":"ContainerStarted","Data":"83f13fa1f1d7b8fd9cdb6a74b177b498a4f2d071f3d06f1410b0b9e8b508fd5b"}
Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.454853 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" event={"ID":"d583ac10-9ad2-4f95-9787-74f2cb28c943","Type":"ContainerStarted","Data":"2d0bf6215441a1b8402ca1dd3be8ae24eeeb60ec87a954fcd4f4d59c921b608a"}
Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.473152 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" event={"ID":"8d4c4ad7-542f-4d25-a444-7b4752e43f89","Type":"ContainerStarted","Data":"0ecea067f607b3899bc3a5a0881b814f78da418dbdb8e56a01e2232763373878"}
Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.479001 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" podUID="8d4c4ad7-542f-4d25-a444-7b4752e43f89"
Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.494328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" event={"ID":"d67e5221-5cd4-4659-a41b-5d470f435c3e","Type":"ContainerStarted","Data":"09e0d1c5ec7f1a07494f5c8c6a3b29b423b52d17aef5bdf97721f8bf6c65887c"}
Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.499799 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" podUID="d67e5221-5cd4-4659-a41b-5d470f435c3e"
Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.537172 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" event={"ID":"794d8603-8fa6-4068-8a38-e0825d42ae3f","Type":"ContainerStarted","Data":"3f6311770b658b79200ae03dd84f08003c81190aa7de83d04d5ef3927e2992f8"}
Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.897466 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.897756 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.897764 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.897867 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:44.897841657 +0000 UTC m=+1039.082818733 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found
Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.897930 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.898004 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:44.897984821 +0000 UTC m=+1039.082961897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found
Feb 16 15:10:41 crc kubenswrapper[4705]: E0216 15:10:41.560220 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" podUID="8d4c4ad7-542f-4d25-a444-7b4752e43f89"
Feb 16 15:10:41 crc kubenswrapper[4705]: E0216 15:10:41.561555 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" podUID="d67e5221-5cd4-4659-a41b-5d470f435c3e"
Feb 16 15:10:43 crc kubenswrapper[4705]: I0216 15:10:43.676225 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"
Feb 16 15:10:43 crc kubenswrapper[4705]: E0216 15:10:43.676444 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 15:10:43 crc kubenswrapper[4705]: E0216 15:10:43.676565 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:51.676543631 +0000 UTC m=+1045.861520707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found
Feb 16 15:10:44 crc kubenswrapper[4705]: I0216 15:10:44.394228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"
Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.394919 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.395033 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:52.395007815 +0000 UTC m=+1046.579984891 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 15:10:44 crc kubenswrapper[4705]: I0216 15:10:44.908709 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:44 crc kubenswrapper[4705]: I0216 15:10:44.908821 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.909016 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.909077 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:52.909058273 +0000 UTC m=+1047.094035349 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found
Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.909461 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.909500 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:52.909490265 +0000 UTC m=+1047.094467341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found
Feb 16 15:10:51 crc kubenswrapper[4705]: I0216 15:10:51.756183 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"
Feb 16 15:10:51 crc kubenswrapper[4705]: I0216 15:10:51.766191 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"
Feb 16 15:10:51 crc kubenswrapper[4705]: E0216 15:10:51.958444 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867"
Feb 16 15:10:51 crc kubenswrapper[4705]: E0216 15:10:51.958718 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqzh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-ftdcn_openstack-operators(a6d65371-bf15-42b9-857d-c4c7350aa402): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 16 15:10:51 crc kubenswrapper[4705]: E0216 15:10:51.960511 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" podUID="a6d65371-bf15-42b9-857d-c4c7350aa402"
Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.026696 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"
Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.472263 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"
Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.482213 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"
Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.674820 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"
Feb 16 15:10:52 crc kubenswrapper[4705]: E0216 15:10:52.681100 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" podUID="a6d65371-bf15-42b9-857d-c4c7350aa402"
Feb 16 15:10:52 crc kubenswrapper[4705]: E0216 15:10:52.853619 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc"
Feb 16 15:10:52 crc kubenswrapper[4705]: E0216 15:10:52.853900 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqqdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-f52r7_openstack-operators(1b9942d1-9e1e-436b-8a58-e37d6b55a00b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 16 15:10:52 crc kubenswrapper[4705]: E0216 15:10:52.855017 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" podUID="1b9942d1-9e1e-436b-8a58-e37d6b55a00b"
Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.983882 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.984032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.990462 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:53 crc kubenswrapper[4705]: I0216 15:10:53.000154 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:53 crc kubenswrapper[4705]: I0216 15:10:53.214387 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"
Feb 16 15:10:53 crc kubenswrapper[4705]: E0216 15:10:53.694839 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" podUID="1b9942d1-9e1e-436b-8a58-e37d6b55a00b"
Feb 16 15:10:53 crc kubenswrapper[4705]: E0216 15:10:53.731286 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0"
Feb 16 15:10:53 crc kubenswrapper[4705]: E0216 15:10:53.731587 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {}
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjkq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-77d2l_openstack-operators(d583ac10-9ad2-4f95-9787-74f2cb28c943): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:53 crc kubenswrapper[4705]: E0216 15:10:53.732856 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" podUID="d583ac10-9ad2-4f95-9787-74f2cb28c943" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.360961 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.361268 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8q5l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-kh759_openstack-operators(e73efbc6-26db-4760-b745-3c93c9b2329e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.363362 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" podUID="e73efbc6-26db-4760-b745-3c93c9b2329e" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.702886 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" podUID="e73efbc6-26db-4760-b745-3c93c9b2329e" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.702897 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" podUID="d583ac10-9ad2-4f95-9787-74f2cb28c943" Feb 16 15:10:56 crc kubenswrapper[4705]: E0216 15:10:56.946653 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 16 15:10:56 crc kubenswrapper[4705]: E0216 15:10:56.947434 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvxl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-dnbpd_openstack-operators(f06e9156-0c7b-41f6-a1cf-83820a7e7732): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:56 crc kubenswrapper[4705]: E0216 15:10:56.948941 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" podUID="f06e9156-0c7b-41f6-a1cf-83820a7e7732" Feb 16 15:10:57 crc kubenswrapper[4705]: E0216 15:10:57.727103 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" podUID="f06e9156-0c7b-41f6-a1cf-83820a7e7732" Feb 16 15:10:58 crc kubenswrapper[4705]: E0216 15:10:58.851782 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 16 15:10:58 crc kubenswrapper[4705]: E0216 15:10:58.852549 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-q5n45_openstack-operators(f1a4206b-818d-49e7-9177-9dc7373ded1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:58 crc kubenswrapper[4705]: E0216 15:10:58.853764 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" podUID="f1a4206b-818d-49e7-9177-9dc7373ded1c" Feb 16 15:10:59 crc kubenswrapper[4705]: E0216 15:10:59.747277 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" podUID="f1a4206b-818d-49e7-9177-9dc7373ded1c" Feb 16 15:11:00 crc kubenswrapper[4705]: E0216 15:11:00.803673 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 16 15:11:00 crc kubenswrapper[4705]: E0216 15:11:00.804676 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9p5hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-bk9rm_openstack-operators(c66cb2ee-a6d3-454b-a2ea-a160038b76f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:00 crc kubenswrapper[4705]: E0216 15:11:00.806026 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" podUID="c66cb2ee-a6d3-454b-a2ea-a160038b76f6" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.461265 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.461628 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nh952,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-hw64s_openstack-operators(d4a1c432-7691-472b-80af-caaa6afcacb2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.462871 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" podUID="d4a1c432-7691-472b-80af-caaa6afcacb2" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.766125 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" podUID="c66cb2ee-a6d3-454b-a2ea-a160038b76f6" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.767670 4705 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" podUID="d4a1c432-7691-472b-80af-caaa6afcacb2" Feb 16 15:11:02 crc kubenswrapper[4705]: E0216 15:11:02.165007 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 16 15:11:02 crc kubenswrapper[4705]: E0216 15:11:02.165251 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8pbk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-6c6fr_openstack-operators(ca67e7ec-20a9-4768-ae37-3aa90f721201): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:02 crc kubenswrapper[4705]: E0216 15:11:02.166512 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" podUID="ca67e7ec-20a9-4768-ae37-3aa90f721201" Feb 16 15:11:02 crc kubenswrapper[4705]: E0216 15:11:02.781455 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" podUID="ca67e7ec-20a9-4768-ae37-3aa90f721201" Feb 16 15:11:03 crc kubenswrapper[4705]: E0216 15:11:03.214436 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 16 15:11:03 crc kubenswrapper[4705]: E0216 15:11:03.215039 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bnmxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-zk57l_openstack-operators(7373be90-eefb-4c2b-bdbd-a312daef2434): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:03 crc kubenswrapper[4705]: E0216 15:11:03.216392 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" podUID="7373be90-eefb-4c2b-bdbd-a312daef2434" Feb 16 15:11:03 crc kubenswrapper[4705]: E0216 15:11:03.796441 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" podUID="7373be90-eefb-4c2b-bdbd-a312daef2434" Feb 16 15:11:05 crc kubenswrapper[4705]: E0216 15:11:05.553420 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 16 15:11:05 crc kubenswrapper[4705]: E0216 15:11:05.553948 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8jw5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-xdlbv_openstack-operators(59e2a9a8-5a0d-4772-8d9c-b755fcd234be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:05 crc kubenswrapper[4705]: E0216 15:11:05.556544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" podUID="59e2a9a8-5a0d-4772-8d9c-b755fcd234be" Feb 16 15:11:05 crc kubenswrapper[4705]: E0216 15:11:05.815713 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" podUID="59e2a9a8-5a0d-4772-8d9c-b755fcd234be" Feb 16 15:11:06 crc kubenswrapper[4705]: E0216 15:11:06.189022 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 15:11:06 crc kubenswrapper[4705]: E0216 15:11:06.189253 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84pqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-8lztr_openstack-operators(34eadd57-e91b-4324-93c0-ede339012ab3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:06 crc kubenswrapper[4705]: E0216 15:11:06.191339 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" podUID="34eadd57-e91b-4324-93c0-ede339012ab3" Feb 16 15:11:06 crc kubenswrapper[4705]: E0216 15:11:06.824420 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" podUID="34eadd57-e91b-4324-93c0-ede339012ab3" Feb 16 15:11:08 crc kubenswrapper[4705]: E0216 15:11:08.753172 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 15:11:08 crc kubenswrapper[4705]: E0216 15:11:08.753745 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnt9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-b6587_openstack-operators(8279d837-6ad4-4e2b-a03a-eb0a24a30998): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:08 crc kubenswrapper[4705]: E0216 15:11:08.755213 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" podUID="8279d837-6ad4-4e2b-a03a-eb0a24a30998" Feb 16 15:11:08 crc kubenswrapper[4705]: E0216 15:11:08.880550 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" podUID="8279d837-6ad4-4e2b-a03a-eb0a24a30998" Feb 16 15:11:09 crc kubenswrapper[4705]: I0216 15:11:09.212734 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"] Feb 16 15:11:09 crc kubenswrapper[4705]: I0216 15:11:09.884094 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" event={"ID":"9bd1689a-ae93-4ac0-ab21-c899756ef07a","Type":"ContainerStarted","Data":"02a33bc9560ba627451b465a76120b11857961a8c985b83240446e9db08c2627"} Feb 16 15:11:09 crc kubenswrapper[4705]: I0216 15:11:09.922324 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"] Feb 16 15:11:09 crc kubenswrapper[4705]: I0216 15:11:09.986131 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"] Feb 16 15:11:10 crc kubenswrapper[4705]: W0216 15:11:10.183038 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1872b592_a1cc_445a_b75f_f658612dc160.slice/crio-edeb3f7b34cbe5466d8259156677ba53dfa1f994606cf96d465ea52dad191658 WatchSource:0}: Error finding container edeb3f7b34cbe5466d8259156677ba53dfa1f994606cf96d465ea52dad191658: Status 404 returned error can't find the container with id edeb3f7b34cbe5466d8259156677ba53dfa1f994606cf96d465ea52dad191658 Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.912722 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" 
event={"ID":"07891331-9fdb-4922-aea1-6a3acf7f656f","Type":"ContainerStarted","Data":"c0388a91e8104ecd452db96ed97457e8f6ad6c3149150248281ab915a1bf221e"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.913149 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" event={"ID":"07891331-9fdb-4922-aea1-6a3acf7f656f","Type":"ContainerStarted","Data":"00a332e1694035de770f854f75759759e0a7a681a9785f2d2412ef442f9a34d9"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.914479 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.923440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" event={"ID":"f0b4e27c-91ff-4540-bfff-e6c30849c75f","Type":"ContainerStarted","Data":"3b6ec758ca3e96a2800ff59221eb969d8073fea14bc66f751cb0b8ee1d67966d"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.924356 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.936674 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" event={"ID":"1872b592-a1cc-445a-b75f-f658612dc160","Type":"ContainerStarted","Data":"edeb3f7b34cbe5466d8259156677ba53dfa1f994606cf96d465ea52dad191658"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.953651 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" event={"ID":"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e","Type":"ContainerStarted","Data":"886d09b73747919bed7e7c1cc82c961d6bff011bd64be69bc95e204af2e2fa7c"} Feb 16 
15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.954864 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.956345 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" event={"ID":"d583ac10-9ad2-4f95-9787-74f2cb28c943","Type":"ContainerStarted","Data":"30e901058a65ca78e4b2071132f2ea5301f7898067e2263382868ce7f7573bec"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.957498 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.967728 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" event={"ID":"1b9942d1-9e1e-436b-8a58-e37d6b55a00b","Type":"ContainerStarted","Data":"70a51012dbb0f26f2386d4d9f843820be6b6a8980664fa15a57df1704dbc6cfb"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.968489 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.982431 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" event={"ID":"e73efbc6-26db-4760-b745-3c93c9b2329e","Type":"ContainerStarted","Data":"e6d775580a1ff4966c5f8b78051c26adcb74e5b0844d99c4634f3d29852170ea"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.983508 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.984738 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" event={"ID":"84edc365-fa2c-40bc-ae0e-b71ae094ab27","Type":"ContainerStarted","Data":"c273eba925bfb5987af04b3e7438808c96b1ca182bf3e54ec9fd7621601fe915"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.985175 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.986929 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" event={"ID":"794d8603-8fa6-4068-8a38-e0825d42ae3f","Type":"ContainerStarted","Data":"04f520d38cd740f487a4ee0f874f679eb7e666034e72c8d4fec754fd2a85b0ca"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.987149 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.990896 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" podStartSLOduration=34.990879415 podStartE2EDuration="34.990879415s" podCreationTimestamp="2026-02-16 15:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:11:10.974459182 +0000 UTC m=+1065.159436258" watchObservedRunningTime="2026-02-16 15:11:10.990879415 +0000 UTC m=+1065.175856491" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.010078 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" event={"ID":"9f0ad3cb-ac80-4462-bd97-b09f9367dc54","Type":"ContainerStarted","Data":"99eb9a85eef51d842fc7c7af7df01eea7d9cfa79a658b4a6af9be0dd230d248d"} Feb 16 15:11:11 crc 
kubenswrapper[4705]: I0216 15:11:11.011028 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.014469 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" podStartSLOduration=5.768179625 podStartE2EDuration="36.014455979s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:37.905577232 +0000 UTC m=+1032.090554308" lastFinishedPulling="2026-02-16 15:11:08.151853586 +0000 UTC m=+1062.336830662" observedRunningTime="2026-02-16 15:11:11.009763087 +0000 UTC m=+1065.194740163" watchObservedRunningTime="2026-02-16 15:11:11.014455979 +0000 UTC m=+1065.199433055" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.031647 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" event={"ID":"8d4c4ad7-542f-4d25-a444-7b4752e43f89","Type":"ContainerStarted","Data":"a37c138621a40bad4a022cf4aec5313c8a095e1a3ccd124227862dcd9fb4212b"} Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.032822 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.044212 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" podStartSLOduration=3.668708489 podStartE2EDuration="36.044187626s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:37.918743177 +0000 UTC m=+1032.103720253" lastFinishedPulling="2026-02-16 15:11:10.294222314 +0000 UTC m=+1064.479199390" observedRunningTime="2026-02-16 15:11:11.035696867 +0000 UTC m=+1065.220673943" 
watchObservedRunningTime="2026-02-16 15:11:11.044187626 +0000 UTC m=+1065.229164702" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.046646 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" event={"ID":"d67e5221-5cd4-4659-a41b-5d470f435c3e","Type":"ContainerStarted","Data":"802b5d982eb2a2824d8a315a61f754dc128cf0d902f081586255ed15685f8e02"} Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.055712 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" event={"ID":"a6d65371-bf15-42b9-857d-c4c7350aa402","Type":"ContainerStarted","Data":"a28f73b409f77661d439c8f4462c43c745659cc60cfb24b10b12f6d93b752170"} Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.056700 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.066660 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" podStartSLOduration=5.905581495 podStartE2EDuration="36.066636859s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:37.990764511 +0000 UTC m=+1032.175741587" lastFinishedPulling="2026-02-16 15:11:08.151819855 +0000 UTC m=+1062.336796951" observedRunningTime="2026-02-16 15:11:11.062709398 +0000 UTC m=+1065.247686464" watchObservedRunningTime="2026-02-16 15:11:11.066636859 +0000 UTC m=+1065.251613935" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.124926 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" podStartSLOduration=3.995226665 podStartE2EDuration="35.124896119s" podCreationTimestamp="2026-02-16 15:10:36 +0000 UTC" 
firstStartedPulling="2026-02-16 15:10:39.111099464 +0000 UTC m=+1033.296076540" lastFinishedPulling="2026-02-16 15:11:10.240768918 +0000 UTC m=+1064.425745994" observedRunningTime="2026-02-16 15:11:11.109758473 +0000 UTC m=+1065.294735549" watchObservedRunningTime="2026-02-16 15:11:11.124896119 +0000 UTC m=+1065.309873195" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.221697 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" podStartSLOduration=4.260185208 podStartE2EDuration="36.221665045s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.233341987 +0000 UTC m=+1032.418319063" lastFinishedPulling="2026-02-16 15:11:10.194821814 +0000 UTC m=+1064.379798900" observedRunningTime="2026-02-16 15:11:11.200653383 +0000 UTC m=+1065.385630459" watchObservedRunningTime="2026-02-16 15:11:11.221665045 +0000 UTC m=+1065.406642121" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.273887 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" podStartSLOduration=7.230830559 podStartE2EDuration="36.273855375s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:39.110784455 +0000 UTC m=+1033.295761531" lastFinishedPulling="2026-02-16 15:11:08.153809261 +0000 UTC m=+1062.338786347" observedRunningTime="2026-02-16 15:11:11.246513705 +0000 UTC m=+1065.431490781" watchObservedRunningTime="2026-02-16 15:11:11.273855375 +0000 UTC m=+1065.458832451" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.339394 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" podStartSLOduration=5.144732827 podStartE2EDuration="36.33935049s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" 
firstStartedPulling="2026-02-16 15:10:36.958488078 +0000 UTC m=+1031.143465164" lastFinishedPulling="2026-02-16 15:11:08.153105751 +0000 UTC m=+1062.338082827" observedRunningTime="2026-02-16 15:11:11.325655754 +0000 UTC m=+1065.510632830" watchObservedRunningTime="2026-02-16 15:11:11.33935049 +0000 UTC m=+1065.524327566" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.456533 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" podStartSLOduration=4.456224659 podStartE2EDuration="36.456489679s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.20435002 +0000 UTC m=+1032.389327096" lastFinishedPulling="2026-02-16 15:11:10.20461504 +0000 UTC m=+1064.389592116" observedRunningTime="2026-02-16 15:11:11.440782426 +0000 UTC m=+1065.625759502" watchObservedRunningTime="2026-02-16 15:11:11.456489679 +0000 UTC m=+1065.641466745" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.496982 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" podStartSLOduration=5.427758501 podStartE2EDuration="36.491402582s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:39.210300792 +0000 UTC m=+1033.395277868" lastFinishedPulling="2026-02-16 15:11:10.273944873 +0000 UTC m=+1064.458921949" observedRunningTime="2026-02-16 15:11:11.48776839 +0000 UTC m=+1065.672745466" watchObservedRunningTime="2026-02-16 15:11:11.491402582 +0000 UTC m=+1065.676379658" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.543252 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" podStartSLOduration=4.420684559 podStartE2EDuration="35.543233172s" podCreationTimestamp="2026-02-16 15:10:36 +0000 UTC" 
firstStartedPulling="2026-02-16 15:10:39.171500406 +0000 UTC m=+1033.356477482" lastFinishedPulling="2026-02-16 15:11:10.294049009 +0000 UTC m=+1064.479026095" observedRunningTime="2026-02-16 15:11:11.539936069 +0000 UTC m=+1065.724913155" watchObservedRunningTime="2026-02-16 15:11:11.543233172 +0000 UTC m=+1065.728210248" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.592263 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" podStartSLOduration=7.092199475 podStartE2EDuration="36.592246072s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.653332562 +0000 UTC m=+1032.838309638" lastFinishedPulling="2026-02-16 15:11:08.153379159 +0000 UTC m=+1062.338356235" observedRunningTime="2026-02-16 15:11:11.590701479 +0000 UTC m=+1065.775678565" watchObservedRunningTime="2026-02-16 15:11:11.592246072 +0000 UTC m=+1065.777223148" Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.086104 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" event={"ID":"f1a4206b-818d-49e7-9177-9dc7373ded1c","Type":"ContainerStarted","Data":"5d57fcc57a5792fb93ce1f1f6a3dd54a202d2e83574ff7d0f17bcb3eec786412"} Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.087087 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.094987 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" event={"ID":"f06e9156-0c7b-41f6-a1cf-83820a7e7732","Type":"ContainerStarted","Data":"88c0ce3a4dee1d6fdc271f499cdeb940241dabfca5aa9a0d8fcd431f503ecd19"} Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.095756 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.112438 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" podStartSLOduration=3.143099187 podStartE2EDuration="39.112410861s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:37.147014383 +0000 UTC m=+1031.331991459" lastFinishedPulling="2026-02-16 15:11:13.116326057 +0000 UTC m=+1067.301303133" observedRunningTime="2026-02-16 15:11:14.106873785 +0000 UTC m=+1068.291850871" watchObservedRunningTime="2026-02-16 15:11:14.112410861 +0000 UTC m=+1068.297387937" Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.136757 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" podStartSLOduration=4.416593823 podStartE2EDuration="39.136727376s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.290662611 +0000 UTC m=+1032.475639677" lastFinishedPulling="2026-02-16 15:11:13.010796154 +0000 UTC m=+1067.195773230" observedRunningTime="2026-02-16 15:11:14.128937987 +0000 UTC m=+1068.313915103" watchObservedRunningTime="2026-02-16 15:11:14.136727376 +0000 UTC m=+1068.321704492" Feb 16 15:11:15 crc kubenswrapper[4705]: I0216 15:11:15.819910 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:11:15 crc kubenswrapper[4705]: I0216 15:11:15.820270 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:11:15 crc kubenswrapper[4705]: I0216 15:11:15.825085 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:11:15 crc kubenswrapper[4705]: I0216 15:11:15.960042 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.116241 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" event={"ID":"c66cb2ee-a6d3-454b-a2ea-a160038b76f6","Type":"ContainerStarted","Data":"c64349a54a7e60292c5ea466997d0709f7abb50d08b9910714aefd138c7e4c4a"} Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.117770 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.119752 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" event={"ID":"9bd1689a-ae93-4ac0-ab21-c899756ef07a","Type":"ContainerStarted","Data":"baecbcc39cb32374016576b48d7c2e30efbf65d1ca3d0699c79b79ee7b705069"} Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.119865 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.122104 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" event={"ID":"1872b592-a1cc-445a-b75f-f658612dc160","Type":"ContainerStarted","Data":"5a1a32e1f569f196520b32cb3315cc745de1f9db08d98119108bc01428cc9407"} Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.122299 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:11:16 
crc kubenswrapper[4705]: I0216 15:11:16.144977 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" podStartSLOduration=4.603596258 podStartE2EDuration="41.144945707s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:39.088322485 +0000 UTC m=+1033.273299561" lastFinishedPulling="2026-02-16 15:11:15.629671934 +0000 UTC m=+1069.814649010" observedRunningTime="2026-02-16 15:11:16.143249069 +0000 UTC m=+1070.328226145" watchObservedRunningTime="2026-02-16 15:11:16.144945707 +0000 UTC m=+1070.329922783" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.146357 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.209682 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" podStartSLOduration=35.743891478 podStartE2EDuration="41.209650459s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:11:10.194153775 +0000 UTC m=+1064.379130851" lastFinishedPulling="2026-02-16 15:11:15.659912746 +0000 UTC m=+1069.844889832" observedRunningTime="2026-02-16 15:11:16.181917628 +0000 UTC m=+1070.366894704" watchObservedRunningTime="2026-02-16 15:11:16.209650459 +0000 UTC m=+1070.394627535" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.212700 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" podStartSLOduration=35.029828827 podStartE2EDuration="41.212673934s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:11:09.477071079 +0000 UTC m=+1063.662048165" lastFinishedPulling="2026-02-16 15:11:15.659916196 
+0000 UTC m=+1069.844893272" observedRunningTime="2026-02-16 15:11:16.20256636 +0000 UTC m=+1070.387543436" watchObservedRunningTime="2026-02-16 15:11:16.212673934 +0000 UTC m=+1070.397651010" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.413084 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.451990 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.931803 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.947357 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:11:17 crc kubenswrapper[4705]: I0216 15:11:17.145834 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" event={"ID":"d4a1c432-7691-472b-80af-caaa6afcacb2","Type":"ContainerStarted","Data":"bfc3f9ca887b472519251656402b6ecd440d6adbbcc6a32960895a97fb04f49b"} Feb 16 15:11:17 crc kubenswrapper[4705]: I0216 15:11:17.147040 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:11:17 crc kubenswrapper[4705]: I0216 15:11:17.169015 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" podStartSLOduration=4.748918575 podStartE2EDuration="42.168993818s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.559037463 +0000 
UTC m=+1032.744014529" lastFinishedPulling="2026-02-16 15:11:15.979112696 +0000 UTC m=+1070.164089772" observedRunningTime="2026-02-16 15:11:17.16692012 +0000 UTC m=+1071.351897206" watchObservedRunningTime="2026-02-16 15:11:17.168993818 +0000 UTC m=+1071.353970894" Feb 16 15:11:17 crc kubenswrapper[4705]: I0216 15:11:17.266182 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.158272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" event={"ID":"7373be90-eefb-4c2b-bdbd-a312daef2434","Type":"ContainerStarted","Data":"27e6eedaccb9ab708cd6338f682159b8d96abdbcbfe78114130d44004c17b8cd"} Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.158892 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.161208 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" event={"ID":"ca67e7ec-20a9-4768-ae37-3aa90f721201","Type":"ContainerStarted","Data":"12c8f52b838f5d0ee99eca55dbae3b7837c74ef9fe6bfb7f995ca068ba68cdbb"} Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.182521 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" podStartSLOduration=4.71935496 podStartE2EDuration="43.182498113s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.665940461 +0000 UTC m=+1032.850917537" lastFinishedPulling="2026-02-16 15:11:17.129083614 +0000 UTC m=+1071.314060690" observedRunningTime="2026-02-16 15:11:18.180679122 +0000 UTC m=+1072.365656208" watchObservedRunningTime="2026-02-16 
15:11:18.182498113 +0000 UTC m=+1072.367475189" Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.206582 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" podStartSLOduration=5.157662393 podStartE2EDuration="43.206555871s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:39.087934564 +0000 UTC m=+1033.272911640" lastFinishedPulling="2026-02-16 15:11:17.136828032 +0000 UTC m=+1071.321805118" observedRunningTime="2026-02-16 15:11:18.202163517 +0000 UTC m=+1072.387140603" watchObservedRunningTime="2026-02-16 15:11:18.206555871 +0000 UTC m=+1072.391532957" Feb 16 15:11:19 crc kubenswrapper[4705]: I0216 15:11:19.171273 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" event={"ID":"59e2a9a8-5a0d-4772-8d9c-b755fcd234be","Type":"ContainerStarted","Data":"d3adc4667521059be5b629406c458a39d7d58140309107d790c3bc419ea0fd6c"} Feb 16 15:11:19 crc kubenswrapper[4705]: I0216 15:11:19.172054 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:11:19 crc kubenswrapper[4705]: I0216 15:11:19.192299 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" podStartSLOduration=3.320353179 podStartE2EDuration="44.192277423s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.03703756 +0000 UTC m=+1032.222014636" lastFinishedPulling="2026-02-16 15:11:18.908961764 +0000 UTC m=+1073.093938880" observedRunningTime="2026-02-16 15:11:19.185133052 +0000 UTC m=+1073.370110138" watchObservedRunningTime="2026-02-16 15:11:19.192277423 +0000 UTC m=+1073.377254499" Feb 16 15:11:21 crc kubenswrapper[4705]: I0216 15:11:21.199304 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" event={"ID":"34eadd57-e91b-4324-93c0-ede339012ab3","Type":"ContainerStarted","Data":"d28cbfcacecf469f1cfa8d86454fb022e4204df868a129d8fe15a64f9744de37"} Feb 16 15:11:21 crc kubenswrapper[4705]: I0216 15:11:21.200531 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:11:21 crc kubenswrapper[4705]: I0216 15:11:21.220314 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" podStartSLOduration=4.14332459 podStartE2EDuration="46.22027088s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.013633323 +0000 UTC m=+1032.198610399" lastFinishedPulling="2026-02-16 15:11:20.090579613 +0000 UTC m=+1074.275556689" observedRunningTime="2026-02-16 15:11:21.216413761 +0000 UTC m=+1075.401390837" watchObservedRunningTime="2026-02-16 15:11:21.22027088 +0000 UTC m=+1075.405247956" Feb 16 15:11:22 crc kubenswrapper[4705]: I0216 15:11:22.037316 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:11:22 crc kubenswrapper[4705]: I0216 15:11:22.682435 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:11:23 crc kubenswrapper[4705]: I0216 15:11:23.223533 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" event={"ID":"8279d837-6ad4-4e2b-a03a-eb0a24a30998","Type":"ContainerStarted","Data":"03b068295e0654ebb37c19c31b73d4a8886a8926c28d589fc7c38ed730fafa87"} Feb 16 15:11:23 crc kubenswrapper[4705]: I0216 15:11:23.224148 
4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:11:23 crc kubenswrapper[4705]: I0216 15:11:23.224331 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:11:23 crc kubenswrapper[4705]: I0216 15:11:23.247663 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" podStartSLOduration=3.988081015 podStartE2EDuration="48.24762078s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.581158164 +0000 UTC m=+1032.766135240" lastFinishedPulling="2026-02-16 15:11:22.840697889 +0000 UTC m=+1077.025675005" observedRunningTime="2026-02-16 15:11:23.241053355 +0000 UTC m=+1077.426030451" watchObservedRunningTime="2026-02-16 15:11:23.24762078 +0000 UTC m=+1077.432597886" Feb 16 15:11:25 crc kubenswrapper[4705]: I0216 15:11:25.932030 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:11:25 crc kubenswrapper[4705]: I0216 15:11:25.973508 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.300456 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.394326 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.616323 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.678893 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.992903 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.996835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:11:27 crc kubenswrapper[4705]: I0216 15:11:27.011202 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:11:31 crc kubenswrapper[4705]: I0216 15:11:31.684326 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:11:31 crc kubenswrapper[4705]: I0216 15:11:31.685637 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:11:36 crc kubenswrapper[4705]: I0216 15:11:36.601514 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.918173 4705 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.925197 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.931091 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.932039 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.932336 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qrmjt" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.932862 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.933085 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.945985 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.946788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.946894 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: 
\"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.948838 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.956941 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.971252 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049159 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049266 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049319 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76xwq\" (UniqueName: 
\"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049397 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.050458 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.071101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.152150 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76xwq\" (UniqueName: \"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.152309 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") pod 
\"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.152625 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.153530 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.153539 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.172942 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76xwq\" (UniqueName: \"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.270855 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.279804 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.765178 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.851758 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:11:59 crc kubenswrapper[4705]: W0216 15:11:59.855645 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ebe5f1b_1a13_4172_8662_aeae2c43ade1.slice/crio-94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a WatchSource:0}: Error finding container 94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a: Status 404 returned error can't find the container with id 94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a Feb 16 15:12:00 crc kubenswrapper[4705]: I0216 15:12:00.607094 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" event={"ID":"6ebe5f1b-1a13-4172-8662-aeae2c43ade1","Type":"ContainerStarted","Data":"94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a"} Feb 16 15:12:00 crc kubenswrapper[4705]: I0216 15:12:00.608732 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" event={"ID":"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27","Type":"ContainerStarted","Data":"0d5732ad1582d0dc0f1a09eb172ef4f895ed8673cbf8cf85d9d7eaad2e583287"} Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.685854 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.686242 4705 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.751778 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.783852 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"] Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.785448 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.824681 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"] Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.988043 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.988147 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.988230 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.090145 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.090226 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.090282 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.091896 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.091941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") pod 
\"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.148360 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.158946 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.192000 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.210351 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.287532 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.349171 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.349233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 
crc kubenswrapper[4705]: I0216 15:12:02.349302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.409426 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.450736 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.450844 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.450939 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.451942 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") pod 
\"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.452206 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.494255 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.546468 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.978167 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.989298 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.017648 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.017697 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-st4tw" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.025165 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.025505 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.025929 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.025979 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.029652 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.053505 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.066556 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.071738 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.087342 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.091169 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.104713 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105008 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105099 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105124 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105171 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 
16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105199 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105279 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105310 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105394 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd25j\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105420 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc 
kubenswrapper[4705]: I0216 15:12:03.105449 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.119780 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.133755 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.142734 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209195 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209537 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209582 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" 
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209616 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209755 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209915 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209987 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.210080 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211215 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211383 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211447 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd25j\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211545 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211619 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211640 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211674 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrknb\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211721 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211752 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211883 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211941 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211977 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211998 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212048 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212133 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212229 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212264 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212302 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212319 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212377 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212487 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212515 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212584 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " 
pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.213856 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.213919 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.214269 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.215521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.220758 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.221180 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.221212 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6913a5af6e0b901f5e41cc9da5820d3446361504ddf8a58e3143477836427e51/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.222414 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.222714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.243188 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.244313 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd25j\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") pod \"rabbitmq-server-0\" (UID: 
\"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.260309 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.296897 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.316798 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318160 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318294 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318324 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318352 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318405 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318447 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: 
\"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318473 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318490 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318523 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318548 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318568 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318594 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318664 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318686 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318703 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrknb\" (UniqueName: 
\"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318766 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318790 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.319354 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.320672 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.321450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " 
pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.327013 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.329089 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.329537 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.329875 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.331007 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.331192 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.332225 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.332463 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.334000 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.334879 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.337509 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.337622 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2e04bcb153e3e04f037e1fc841d6f137a96f2052e5c7d3319ec9bf09db685a60/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.339076 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.343687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.348397 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.354429 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.359299 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.359468 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.359514 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75a91b98174d7040097f89a93bfd5946d971fbacf68f20932d87234b8e73eca0/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.360016 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.361910 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.363945 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrknb\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.364406 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.364613 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jzl8w" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.364745 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.364890 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.365078 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.365217 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.366536 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.390941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.395911 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.398505 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423577 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423675 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423746 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423841 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423887 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423972 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424014 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424140 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424304 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424500 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.494641 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.533340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.533894 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.533934 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.534050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536408 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536564 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536606 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536644 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536725 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536751 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536799 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.537189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.539913 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.541241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.541534 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.549772 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.550682 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.554284 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 
15:12:03.566972 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.567038 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/15fddb9283d0361ec376f6d3697b3a7dae141e971c813fd76f875f1c98aad2dc/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.571082 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.571905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.575775 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.632398 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.698229 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.729236 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.729861 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.743474 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.751237 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" event={"ID":"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec","Type":"ContainerStarted","Data":"483f41b8e768070c0e3971042788df02650602d14770eb6fc300e60a9f3c1c36"} Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.762594 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" event={"ID":"d7dbc743-b65f-414c-adef-c3e8e158e4dc","Type":"ContainerStarted","Data":"cba1b72db61c105e5863e586d645a2f7e94a83ed46db96da197a374840b783e3"} Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.089009 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.536493 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.539049 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.551214 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-bxd9j" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.551282 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.551553 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.552660 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.561814 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.579596 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.603957 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-kolla-config\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693662 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/50502923-5ef9-46a9-a23d-abe8face6040-config-data-generated\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" 
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693779 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693814 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-operator-scripts\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z88j\" (UniqueName: \"kubernetes.io/projected/50502923-5ef9-46a9-a23d-abe8face6040-kube-api-access-7z88j\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693880 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-67aab641-5214-49de-9a0b-3806f71b983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67aab641-5214-49de-9a0b-3806f71b983d\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " 
pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693930 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-config-data-default\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.781510 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.788633 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerStarted","Data":"c10aeda896c97ab2b56b22cb8e034aaa58126bfac49a954b06a32ef9f4316ccc"} Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.794116 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerStarted","Data":"ad93a17a230e0f89ffb728c848e626d65cc868f03d8c72f03802d0c82854159a"} Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.795814 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.795862 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-operator-scripts\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 
15:12:04.795908 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z88j\" (UniqueName: \"kubernetes.io/projected/50502923-5ef9-46a9-a23d-abe8face6040-kube-api-access-7z88j\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.795939 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-67aab641-5214-49de-9a0b-3806f71b983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67aab641-5214-49de-9a0b-3806f71b983d\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.795960 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.796091 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-config-data-default\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.796149 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-kolla-config\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.796199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/50502923-5ef9-46a9-a23d-abe8face6040-config-data-generated\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.796685 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/50502923-5ef9-46a9-a23d-abe8face6040-config-data-generated\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.799428 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-operator-scripts\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.800136 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-config-data-default\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.801303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-kolla-config\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.812087 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-combined-ca-bundle\") pod \"openstack-galera-0\" 
(UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.819929 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.842980 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.855210 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.855262 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-67aab641-5214-49de-9a0b-3806f71b983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67aab641-5214-49de-9a0b-3806f71b983d\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b84ac327ec17a2e5247227ffa0b0ce2e626f629e87314080a000575c7f56c493/globalmount\"" pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.862363 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z88j\" (UniqueName: \"kubernetes.io/projected/50502923-5ef9-46a9-a23d-abe8face6040-kube-api-access-7z88j\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:05 crc kubenswrapper[4705]: I0216 15:12:05.032242 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-67aab641-5214-49de-9a0b-3806f71b983d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67aab641-5214-49de-9a0b-3806f71b983d\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:05 crc kubenswrapper[4705]: I0216 15:12:05.196743 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 15:12:05 crc kubenswrapper[4705]: I0216 15:12:05.843304 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerStarted","Data":"9536c4826f2994651344a9956c3c00d2cb404777160d90908e2937cd52e8fb5f"} Feb 16 15:12:05 crc kubenswrapper[4705]: I0216 15:12:05.848657 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerStarted","Data":"ba74fdfcb7efec48976e7232011d375059db8616337cd4b51be00bbb131415c9"} Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.110557 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.121784 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.126412 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.126776 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.127123 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pg6t9" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.127365 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.171449 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.210664 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.218666 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.221280 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7z2kg" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.221569 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.226810 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.242808 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251019 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv29m\" (UniqueName: \"kubernetes.io/projected/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kube-api-access-rv29m\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251316 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251347 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251388 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251405 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251467 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251509 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251573 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a4db6acc-1871-432c-93a8-6774473ae15f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4db6acc-1871-432c-93a8-6774473ae15f\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" 
Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.272113 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356604 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356682 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356759 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356792 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356824 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a4db6acc-1871-432c-93a8-6774473ae15f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4db6acc-1871-432c-93a8-6774473ae15f\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356871 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv29m\" (UniqueName: \"kubernetes.io/projected/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kube-api-access-rv29m\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356909 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356938 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nflbx\" (UniqueName: \"kubernetes.io/projected/db14762a-eebd-41a0-b107-e879fedc05f1-kube-api-access-nflbx\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.358046 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356969 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.358833 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-kolla-config\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.358944 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-config-data\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.358979 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.359013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.360804 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.361270 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.366253 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.371270 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.372214 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.385777 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.385869 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a4db6acc-1871-432c-93a8-6774473ae15f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4db6acc-1871-432c-93a8-6774473ae15f\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6751ac2a32a11bd99c4c7a4a92851db593f531ecbf0ccd549987b595b7d4796d/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.391509 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv29m\" (UniqueName: \"kubernetes.io/projected/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kube-api-access-rv29m\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: W0216 15:12:06.403466 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50502923_5ef9_46a9_a23d_abe8face6040.slice/crio-a8e77084552df314e6bf7d1574fc9b66862eb9611d3bf0ea4678019797f18f4d WatchSource:0}: Error finding container a8e77084552df314e6bf7d1574fc9b66862eb9611d3bf0ea4678019797f18f4d: Status 404 returned error can't find the container with id a8e77084552df314e6bf7d1574fc9b66862eb9611d3bf0ea4678019797f18f4d Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.454259 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a4db6acc-1871-432c-93a8-6774473ae15f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4db6acc-1871-432c-93a8-6774473ae15f\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 
15:12:06.482697 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.482778 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nflbx\" (UniqueName: \"kubernetes.io/projected/db14762a-eebd-41a0-b107-e879fedc05f1-kube-api-access-nflbx\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.482873 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.482927 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-kolla-config\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.483035 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-config-data\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.484233 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-memcached-tls-certs\") 
pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.484501 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-kolla-config\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.485118 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-config-data\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.515209 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.520326 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.624285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nflbx\" (UniqueName: \"kubernetes.io/projected/db14762a-eebd-41a0-b107-e879fedc05f1-kube-api-access-nflbx\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.849893 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.958762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50502923-5ef9-46a9-a23d-abe8face6040","Type":"ContainerStarted","Data":"a8e77084552df314e6bf7d1574fc9b66862eb9611d3bf0ea4678019797f18f4d"} Feb 16 15:12:07 crc kubenswrapper[4705]: I0216 15:12:07.303163 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 15:12:07 crc kubenswrapper[4705]: W0216 15:12:07.354531 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod616bbda0_7abf_4cfb_b7f8_f8cca8fb5eab.slice/crio-e0c98519d13faeb9bb646c3ae5e43bacaff3ed79ce7d9bc314c70b87ff627e67 WatchSource:0}: Error finding container e0c98519d13faeb9bb646c3ae5e43bacaff3ed79ce7d9bc314c70b87ff627e67: Status 404 returned error can't find the container with id e0c98519d13faeb9bb646c3ae5e43bacaff3ed79ce7d9bc314c70b87ff627e67 Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:07.992742 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab","Type":"ContainerStarted","Data":"e0c98519d13faeb9bb646c3ae5e43bacaff3ed79ce7d9bc314c70b87ff627e67"} Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.582172 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.583777 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.602517 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-p4v2d" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.630790 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.722337 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") pod \"kube-state-metrics-0\" (UID: \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\") " pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.834942 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") pod \"kube-state-metrics-0\" (UID: \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\") " pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.898595 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") pod \"kube-state-metrics-0\" (UID: \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\") " pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.941225 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.780063 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.883359 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.885285 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.889240 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-6zgbs" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.890940 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.917219 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.979770 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.979967 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlmrv\" (UniqueName: \"kubernetes.io/projected/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-kube-api-access-vlmrv\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: 
\"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.089003 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlmrv\" (UniqueName: \"kubernetes.io/projected/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-kube-api-access-vlmrv\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.089088 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: E0216 15:12:10.089247 4705 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 16 15:12:10 crc kubenswrapper[4705]: E0216 15:12:10.089303 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert podName:72697fcc-cd94-4ba9-9479-cb5bd82d83ab nodeName:}" failed. No retries permitted until 2026-02-16 15:12:10.589283962 +0000 UTC m=+1124.774261038 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert") pod "observability-ui-dashboards-66cbf594b5-9hcns" (UID: "72697fcc-cd94-4ba9-9479-cb5bd82d83ab") : secret "observability-ui-dashboards" not found
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.089354 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.158194 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlmrv\" (UniqueName: \"kubernetes.io/projected/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-kube-api-access-vlmrv\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.220833 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.236564 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.245864 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.246248 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bs5tf"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.248282 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.248706 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.248778 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.248857 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.254685 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.266882 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298190 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298246 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298277 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298320 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298344 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298374 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87msx\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298410 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298524 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.299403 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.363465 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b7cd49558-h4srk"]
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.365009 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.414837 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.414970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415079 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-service-ca\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415198 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415429 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415505 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87msx\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415581 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415609 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-oauth-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417076 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnr6f\" (UniqueName: \"kubernetes.io/projected/eeed7723-4cdc-478c-870c-d0e7df3c5673-kube-api-access-dnr6f\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417161 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417241 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-oauth-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417554 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417574 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-trusted-ca-bundle\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.418622 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.419211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.426212 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.436107 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.450347 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.456190 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.460860 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87msx\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.467456 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.468386 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.495142 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b7cd49558-h4srk"]
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.522935 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-oauth-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.523276 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.523349 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-trusted-ca-bundle\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.523519 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-service-ca\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.524016 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.524125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-oauth-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.524228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnr6f\" (UniqueName: \"kubernetes.io/projected/eeed7723-4cdc-478c-870c-d0e7df3c5673-kube-api-access-dnr6f\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.524574 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-oauth-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.525549 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-service-ca\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.529708 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.535567 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.535611 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/88c6cd7cb604a645ab31c0e76d113b8c44ff69d3e39fcb5b354218108db12562/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.536404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-trusted-ca-bundle\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.549433 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-oauth-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.564650 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.565116 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnr6f\" (UniqueName: \"kubernetes.io/projected/eeed7723-4cdc-478c-870c-d0e7df3c5673-kube-api-access-dnr6f\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.628344 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.635557 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.690931 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.724375 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b7cd49558-h4srk"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.832007 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.879779 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.276033 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-crbv8"]
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.277914 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.290283 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.290541 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6f7th"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.290661 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.311678 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8"]
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-scripts\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369769 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-log-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369826 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-ovn-controller-tls-certs\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369847 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2gtc\" (UniqueName: \"kubernetes.io/projected/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-kube-api-access-k2gtc\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369883 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369904 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-combined-ca-bundle\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.484226 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pc9sf"]
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.495717 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.495773 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-log-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.495867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-ovn-controller-tls-certs\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.495901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2gtc\" (UniqueName: \"kubernetes.io/projected/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-kube-api-access-k2gtc\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.495988 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.496032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-combined-ca-bundle\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.496314 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-scripts\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.498709 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.503000 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.504345 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.504645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-log-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.546624 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-scripts\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.557340 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-ovn-controller-tls-certs\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.564801 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2gtc\" (UniqueName: \"kubernetes.io/projected/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-kube-api-access-k2gtc\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.570137 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-combined-ca-bundle\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.634823 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-run\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-lib\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635131 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfmz\" (UniqueName: \"kubernetes.io/projected/be538ffa-cfea-445d-872f-1a0a68b77a50-kube-api-access-hqfmz\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635223 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-etc-ovs\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635261 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be538ffa-cfea-445d-872f-1a0a68b77a50-scripts\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-log\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.641512 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pc9sf"]
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.672769 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743162 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-log\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743276 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-run\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743315 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-lib\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743347 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfmz\" (UniqueName: \"kubernetes.io/projected/be538ffa-cfea-445d-872f-1a0a68b77a50-kube-api-access-hqfmz\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743395 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-etc-ovs\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf"
Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743413 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be538ffa-cfea-445d-872f-1a0a68b77a50-scripts\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743739 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-lib\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-log\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.744027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-run\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.744149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-etc-ovs\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.745818 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be538ffa-cfea-445d-872f-1a0a68b77a50-scripts\") pod 
\"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.795649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfmz\" (UniqueName: \"kubernetes.io/projected/be538ffa-cfea-445d-872f-1a0a68b77a50-kube-api-access-hqfmz\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.959180 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.285952 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.289101 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.293069 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8kb4p" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.293404 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.295221 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.296175 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.296261 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.324802 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.366387 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.366843 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.366971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.367302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.367441 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" 
Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.367528 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.367926 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.368292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlvg\" (UniqueName: \"kubernetes.io/projected/1e54f9b0-7b03-46de-8c76-2a37e44a02df-kube-api-access-7nlvg\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.470890 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471004 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlvg\" (UniqueName: \"kubernetes.io/projected/1e54f9b0-7b03-46de-8c76-2a37e44a02df-kube-api-access-7nlvg\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471051 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471087 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471116 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471169 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471238 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.472204 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.472674 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.473448 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.479868 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.479916 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6a99574cc9e6913add35f0972791bc48bd808b6223c56c5c3ef1a6b5805e6404/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.490334 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlvg\" (UniqueName: \"kubernetes.io/projected/1e54f9b0-7b03-46de-8c76-2a37e44a02df-kube-api-access-7nlvg\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.492188 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.492870 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.511019 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.564985 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.623815 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.823098 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.836312 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.838360 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.839277 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.840238 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.840324 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8c2hn" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.840935 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987104 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-00413287-2052-44e0-8e76-0690fadcc3fc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00413287-2052-44e0-8e76-0690fadcc3fc\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987167 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-config\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987217 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987314 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987436 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68mz\" (UniqueName: 
\"kubernetes.io/projected/54e71500-a592-4c97-86c1-4f3f6a4d1b41-kube-api-access-g68mz\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987502 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987525 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.089948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090034 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g68mz\" (UniqueName: \"kubernetes.io/projected/54e71500-a592-4c97-86c1-4f3f6a4d1b41-kube-api-access-g68mz\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090079 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090104 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090150 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-00413287-2052-44e0-8e76-0690fadcc3fc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00413287-2052-44e0-8e76-0690fadcc3fc\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090182 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-config\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090206 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090232 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.091570 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-config\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.091695 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.092531 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.099434 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.099479 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-00413287-2052-44e0-8e76-0690fadcc3fc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00413287-2052-44e0-8e76-0690fadcc3fc\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47827cdad2e80c3b2c570dce059979f5d8271785a0514c2276ab7f5ef7b1b052/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.102296 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.104092 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.106600 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g68mz\" (UniqueName: \"kubernetes.io/projected/54e71500-a592-4c97-86c1-4f3f6a4d1b41-kube-api-access-g68mz\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.107507 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.150860 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-00413287-2052-44e0-8e76-0690fadcc3fc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00413287-2052-44e0-8e76-0690fadcc3fc\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.171283 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:18 crc kubenswrapper[4705]: W0216 15:12:18.799979 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb14762a_eebd_41a0_b107_e879fedc05f1.slice/crio-49b0b02afd9feb56ba4a499c492057973bcde0612eba97d2f35b7a32fab0954a WatchSource:0}: Error finding container 49b0b02afd9feb56ba4a499c492057973bcde0612eba97d2f35b7a32fab0954a: Status 404 returned error can't find the container with id 49b0b02afd9feb56ba4a499c492057973bcde0612eba97d2f35b7a32fab0954a Feb 16 15:12:18 crc kubenswrapper[4705]: W0216 15:12:18.813739 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fcf9e_1bc7_4b0c_aa83_b4d5daafbcf0.slice/crio-75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955 WatchSource:0}: Error finding container 75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955: Status 404 returned error can't find the container with id 75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955 Feb 16 15:12:19 crc kubenswrapper[4705]: I0216 15:12:19.310748 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0","Type":"ContainerStarted","Data":"75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955"} Feb 16 15:12:19 crc kubenswrapper[4705]: I0216 15:12:19.314046 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"db14762a-eebd-41a0-b107-e879fedc05f1","Type":"ContainerStarted","Data":"49b0b02afd9feb56ba4a499c492057973bcde0612eba97d2f35b7a32fab0954a"} Feb 16 15:12:23 crc kubenswrapper[4705]: I0216 15:12:23.354301 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.351097 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.351732 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrknb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(f6b410b5-951c-43d2-b846-3fef02ec0f7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:26 crc 
kubenswrapper[4705]: E0216 15:12:26.353832 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.355759 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.356141 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfwxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(070373d6-b0bd-43e2-bdf5-ca300875e65d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:26 crc 
kubenswrapper[4705]: E0216 15:12:26.357453 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.412352 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.416672 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.468105 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.471578 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> 
/var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pd25j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(3ba19f15-a399-4d4b-bf32-a2a870a660e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.478776 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" Feb 16 15:12:26 crc kubenswrapper[4705]: I0216 15:12:26.914766 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b7cd49558-h4srk"] Feb 16 15:12:27 crc kubenswrapper[4705]: E0216 15:12:27.408466 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" Feb 16 15:12:31 crc kubenswrapper[4705]: I0216 15:12:31.684419 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:12:31 crc kubenswrapper[4705]: I0216 15:12:31.685272 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:12:31 crc 
kubenswrapper[4705]: I0216 15:12:31.685334 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:12:31 crc kubenswrapper[4705]: I0216 15:12:31.686420 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:12:31 crc kubenswrapper[4705]: I0216 15:12:31.686485 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546" gracePeriod=600 Feb 16 15:12:32 crc kubenswrapper[4705]: I0216 15:12:32.474452 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546" exitCode=0 Feb 16 15:12:32 crc kubenswrapper[4705]: I0216 15:12:32.474510 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546"} Feb 16 15:12:32 crc kubenswrapper[4705]: I0216 15:12:32.474553 4705 scope.go:117] "RemoveContainer" containerID="edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25" Feb 16 15:12:32 crc kubenswrapper[4705]: E0216 15:12:32.582489 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 16 15:12:32 crc kubenswrapper[4705]: E0216 15:12:32.583247 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n85hb6h67fh66bh689h675h85hc5h5b9hd5h5f9hd4h587h88h8fhdfh8hd6h84h85h65ch59bh5cdh5b4h65h76h54bh9bh75h7h9fhd5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeM
ount{Name:kube-api-access-nflbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(db14762a-eebd-41a0-b107-e879fedc05f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:32 crc kubenswrapper[4705]: E0216 15:12:32.584415 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="db14762a-eebd-41a0-b107-e879fedc05f1" Feb 16 15:12:33 crc kubenswrapper[4705]: I0216 15:12:33.144284 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:12:33 crc kubenswrapper[4705]: W0216 
15:12:33.484231 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54e71500_a592_4c97_86c1_4f3f6a4d1b41.slice/crio-6eee0af9e0a9679e0d0589416aab4d0c3d5fc54f9702b7223c78447260e619df WatchSource:0}: Error finding container 6eee0af9e0a9679e0d0589416aab4d0c3d5fc54f9702b7223c78447260e619df: Status 404 returned error can't find the container with id 6eee0af9e0a9679e0d0589416aab4d0c3d5fc54f9702b7223c78447260e619df Feb 16 15:12:33 crc kubenswrapper[4705]: I0216 15:12:33.492990 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e54f9b0-7b03-46de-8c76-2a37e44a02df","Type":"ContainerStarted","Data":"7015bf43b24ac0edcfb8e9b5ae06dfd4fb6a2c4ed1f37ccbce3950e4c8eb9b1c"} Feb 16 15:12:33 crc kubenswrapper[4705]: I0216 15:12:33.494717 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b7cd49558-h4srk" event={"ID":"eeed7723-4cdc-478c-870c-d0e7df3c5673","Type":"ContainerStarted","Data":"e761664f181fbe94235d0ac25e4c497d165a77851c154874a4ee2e27379ca601"} Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.506663 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="db14762a-eebd-41a0-b107-e879fedc05f1" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.529621 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.529849 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-76xwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-22j4x_openstack(3486f2d2-e6a5-44a0-b804-12f9b9fd6a27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.531120 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" podUID="3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.541324 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.541532 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bms9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zdn4j_openstack(d7dbc743-b65f-414c-adef-c3e8e158e4dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.543989 4705 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" podUID="d7dbc743-b65f-414c-adef-c3e8e158e4dc" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.628662 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.628894 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s7td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-crh45_openstack(2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.630503 4705 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.650342 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.650635 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gvv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-b59zw_openstack(6ebe5f1b-1a13-4172-8662-aeae2c43ade1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.651781 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" podUID="6ebe5f1b-1a13-4172-8662-aeae2c43ade1" Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.162238 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pc9sf"] Feb 16 15:12:34 crc kubenswrapper[4705]: W0216 15:12:34.263236 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe538ffa_cfea_445d_872f_1a0a68b77a50.slice/crio-48980dc065ccb6cb0a9e03e40b6bdecfe33219d3798afd739dac8df0f0d7ed77 WatchSource:0}: Error finding container 48980dc065ccb6cb0a9e03e40b6bdecfe33219d3798afd739dac8df0f0d7ed77: Status 404 returned error can't find the container with id 48980dc065ccb6cb0a9e03e40b6bdecfe33219d3798afd739dac8df0f0d7ed77 Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.366325 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8"] Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.380933 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.393860 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"] Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.400348 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.400433 4705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.400573 4705 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdfvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod kube-state-metrics-0_openstack(bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.402281 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" Feb 16 15:12:34 crc kubenswrapper[4705]: W0216 15:12:34.407146 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod761a74d6_061c_47dd_b376_b6d6a1906382.slice/crio-0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf WatchSource:0}: Error finding container 0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf: Status 404 returned error can't find the container with id 0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.509266 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pc9sf" event={"ID":"be538ffa-cfea-445d-872f-1a0a68b77a50","Type":"ContainerStarted","Data":"48980dc065ccb6cb0a9e03e40b6bdecfe33219d3798afd739dac8df0f0d7ed77"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.511413 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b7cd49558-h4srk" event={"ID":"eeed7723-4cdc-478c-870c-d0e7df3c5673","Type":"ContainerStarted","Data":"570fe5e0a26564b86c0dafdca4bd08aba1d9fcfe2a696bf6e121665f1ee5c74c"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.514342 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8" 
event={"ID":"4374b7db-8c42-42e1-b2bd-c633bdd8edfd","Type":"ContainerStarted","Data":"f9058dd413ac7cfea3831d6df5667fadd3a7fa700e156492cd8034af807a3b42"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.520434 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" event={"ID":"72697fcc-cd94-4ba9-9479-cb5bd82d83ab","Type":"ContainerStarted","Data":"f3c882eb84d2e76a027b499c55e561e337efbeaf523fa716e554d4897f609379"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.521770 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"54e71500-a592-4c97-86c1-4f3f6a4d1b41","Type":"ContainerStarted","Data":"6eee0af9e0a9679e0d0589416aab4d0c3d5fc54f9702b7223c78447260e619df"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.524029 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf"} Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.528647 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.529039 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.529121 4705 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" podUID="d7dbc743-b65f-414c-adef-c3e8e158e4dc" Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.551117 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b7cd49558-h4srk" podStartSLOduration=24.551091147 podStartE2EDuration="24.551091147s" podCreationTimestamp="2026-02-16 15:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:34.529794598 +0000 UTC m=+1148.714771724" watchObservedRunningTime="2026-02-16 15:12:34.551091147 +0000 UTC m=+1148.736068223" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.121256 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.140977 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") pod \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243464 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") pod \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243507 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76xwq\" (UniqueName: \"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") pod \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243786 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") pod \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243815 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") pod \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.247545 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config" (OuterVolumeSpecName: "config") pod "6ebe5f1b-1a13-4172-8662-aeae2c43ade1" (UID: "6ebe5f1b-1a13-4172-8662-aeae2c43ade1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.248108 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config" (OuterVolumeSpecName: "config") pod "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" (UID: "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.255268 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq" (OuterVolumeSpecName: "kube-api-access-76xwq") pod "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" (UID: "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27"). InnerVolumeSpecName "kube-api-access-76xwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.255750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4" (OuterVolumeSpecName: "kube-api-access-4gvv4") pod "6ebe5f1b-1a13-4172-8662-aeae2c43ade1" (UID: "6ebe5f1b-1a13-4172-8662-aeae2c43ade1"). InnerVolumeSpecName "kube-api-access-4gvv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.255902 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" (UID: "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346488 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346527 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346539 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346555 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346570 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76xwq\" (UniqueName: \"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.548784 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" event={"ID":"6ebe5f1b-1a13-4172-8662-aeae2c43ade1","Type":"ContainerDied","Data":"94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.548917 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.566668 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab","Type":"ContainerStarted","Data":"9e13cfdaa0e7860b7a1e850fe4dcb52caf0f6d03f39873bfe9cc711c338084e9"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.582864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50502923-5ef9-46a9-a23d-abe8face6040","Type":"ContainerStarted","Data":"103c84a7788dbadecc6c366546288c1783e2293189c562d266657674cbc9aa14"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.628597 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.643934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" event={"ID":"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27","Type":"ContainerDied","Data":"0d5732ad1582d0dc0f1a09eb172ef4f895ed8673cbf8cf85d9d7eaad2e583287"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.643952 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.726348 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.778444 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.857530 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.867812 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:12:36 crc kubenswrapper[4705]: I0216 15:12:36.444944 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" path="/var/lib/kubelet/pods/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27/volumes" Feb 16 15:12:36 crc kubenswrapper[4705]: I0216 15:12:36.446056 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebe5f1b-1a13-4172-8662-aeae2c43ade1" path="/var/lib/kubelet/pods/6ebe5f1b-1a13-4172-8662-aeae2c43ade1/volumes" Feb 16 15:12:36 crc kubenswrapper[4705]: I0216 15:12:36.661727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerStarted","Data":"c45bc0861e5e942a3fddb03b7864490ab4f0322209d56a4aa3501d6face13652"} Feb 16 15:12:39 crc kubenswrapper[4705]: I0216 15:12:39.690436 4705 generic.go:334] "Generic (PLEG): container finished" podID="616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab" containerID="9e13cfdaa0e7860b7a1e850fe4dcb52caf0f6d03f39873bfe9cc711c338084e9" exitCode=0 Feb 16 15:12:39 crc kubenswrapper[4705]: I0216 15:12:39.690517 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab","Type":"ContainerDied","Data":"9e13cfdaa0e7860b7a1e850fe4dcb52caf0f6d03f39873bfe9cc711c338084e9"} Feb 16 15:12:39 crc kubenswrapper[4705]: I0216 15:12:39.693530 4705 generic.go:334] "Generic (PLEG): container finished" podID="50502923-5ef9-46a9-a23d-abe8face6040" containerID="103c84a7788dbadecc6c366546288c1783e2293189c562d266657674cbc9aa14" exitCode=0 Feb 16 15:12:39 crc kubenswrapper[4705]: I0216 15:12:39.693592 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50502923-5ef9-46a9-a23d-abe8face6040","Type":"ContainerDied","Data":"103c84a7788dbadecc6c366546288c1783e2293189c562d266657674cbc9aa14"} Feb 16 15:12:40 crc kubenswrapper[4705]: I0216 15:12:40.725201 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:40 crc kubenswrapper[4705]: I0216 15:12:40.726250 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:40 crc kubenswrapper[4705]: I0216 15:12:40.732862 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.721569 4705 generic.go:334] "Generic (PLEG): container finished" podID="be538ffa-cfea-445d-872f-1a0a68b77a50" containerID="4c915b1f90c65a4caa63253e81c4e410b1a0159bd352e907ae1ccd0cccab77c8" exitCode=0 Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.722219 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pc9sf" event={"ID":"be538ffa-cfea-445d-872f-1a0a68b77a50","Type":"ContainerDied","Data":"4c915b1f90c65a4caa63253e81c4e410b1a0159bd352e907ae1ccd0cccab77c8"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.726802 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"1e54f9b0-7b03-46de-8c76-2a37e44a02df","Type":"ContainerStarted","Data":"3367abbf0870ea65517cf5b9c106672260204eaaef10c2aad38394ac50aff67a"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.729089 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab","Type":"ContainerStarted","Data":"98f6035a3a5636fd5198ef4888309ec6d2bd09b27036a32bfa21a4009719306d"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.731692 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50502923-5ef9-46a9-a23d-abe8face6040","Type":"ContainerStarted","Data":"b3f125af6a38042cb9c2384da06d61de171663eea02070c2ed22d753c10aa053"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.733378 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8" event={"ID":"4374b7db-8c42-42e1-b2bd-c633bdd8edfd","Type":"ContainerStarted","Data":"71cc5ceacaa32910838197e021592ade6a1934e655ca603291b4135afb0575dd"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.733522 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-crbv8" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.736709 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" event={"ID":"72697fcc-cd94-4ba9-9479-cb5bd82d83ab","Type":"ContainerStarted","Data":"72bd449a03d95e84997105dc6d7b60e7eef6f7f195cb1a8094b8aa8ab7f95ed1"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.738403 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"54e71500-a592-4c97-86c1-4f3f6a4d1b41","Type":"ContainerStarted","Data":"b704a3c240e305626a96fee64b859e636d024e4a1605be96661ede88460480c6"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.744697 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.813994 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" podStartSLOduration=26.646955233 podStartE2EDuration="32.813947676s" podCreationTimestamp="2026-02-16 15:12:09 +0000 UTC" firstStartedPulling="2026-02-16 15:12:34.413511637 +0000 UTC m=+1148.598488713" lastFinishedPulling="2026-02-16 15:12:40.58050408 +0000 UTC m=+1154.765481156" observedRunningTime="2026-02-16 15:12:41.76288576 +0000 UTC m=+1155.947862836" watchObservedRunningTime="2026-02-16 15:12:41.813947676 +0000 UTC m=+1155.998924752" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.832234 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-crbv8" podStartSLOduration=24.67030091 podStartE2EDuration="30.83220947s" podCreationTimestamp="2026-02-16 15:12:11 +0000 UTC" firstStartedPulling="2026-02-16 15:12:34.435198697 +0000 UTC m=+1148.620175783" lastFinishedPulling="2026-02-16 15:12:40.597107267 +0000 UTC m=+1154.782084343" observedRunningTime="2026-02-16 15:12:41.826040137 +0000 UTC m=+1156.011017223" watchObservedRunningTime="2026-02-16 15:12:41.83220947 +0000 UTC m=+1156.017186546" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.850095 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.069651888 podStartE2EDuration="38.850068443s" podCreationTimestamp="2026-02-16 15:12:03 +0000 UTC" firstStartedPulling="2026-02-16 15:12:06.44817876 +0000 UTC m=+1120.633155836" lastFinishedPulling="2026-02-16 15:12:34.228595315 +0000 UTC m=+1148.413572391" observedRunningTime="2026-02-16 15:12:41.843242891 +0000 UTC m=+1156.028219977" watchObservedRunningTime="2026-02-16 15:12:41.850068443 +0000 UTC m=+1156.035045519" Feb 16 15:12:41 crc 
kubenswrapper[4705]: I0216 15:12:41.941742 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.098653513 podStartE2EDuration="36.941710331s" podCreationTimestamp="2026-02-16 15:12:05 +0000 UTC" firstStartedPulling="2026-02-16 15:12:07.418092993 +0000 UTC m=+1121.603070069" lastFinishedPulling="2026-02-16 15:12:34.261149821 +0000 UTC m=+1148.446126887" observedRunningTime="2026-02-16 15:12:41.906360076 +0000 UTC m=+1156.091337172" watchObservedRunningTime="2026-02-16 15:12:41.941710331 +0000 UTC m=+1156.126687407" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.956143 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"] Feb 16 15:12:42 crc kubenswrapper[4705]: I0216 15:12:42.753895 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerStarted","Data":"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523"} Feb 16 15:12:43 crc kubenswrapper[4705]: E0216 15:12:43.155455 4705 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.47:57328->38.102.83.47:38595: read tcp 38.102.83.47:57328->38.102.83.47:38595: read: connection reset by peer Feb 16 15:12:43 crc kubenswrapper[4705]: I0216 15:12:43.767460 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerStarted","Data":"663ebd3ccb0d52cf06babb260d76ccd359a0593b49138f63e6178bfe5bfd914d"} Feb 16 15:12:43 crc kubenswrapper[4705]: I0216 15:12:43.768926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerStarted","Data":"86e9ac4153a2ccf0f2f0a689cbb68d98c66cd9f62606340a11ddf8bd0f8e2f02"} Feb 16 15:12:44 crc kubenswrapper[4705]: 
I0216 15:12:44.790531 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.798976 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pc9sf" event={"ID":"be538ffa-cfea-445d-872f-1a0a68b77a50","Type":"ContainerStarted","Data":"cdc54b8b8ee52f0a93f7eebca14a749c54f9f809d78cde49af28ee6f28b31e7d"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.799039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pc9sf" event={"ID":"be538ffa-cfea-445d-872f-1a0a68b77a50","Type":"ContainerStarted","Data":"90a0f7bd9d02870fea5ab26b89c7f506367d3cf65f7f8c24d1a8876c85ab1f9b"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.799771 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.799850 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.808117 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e54f9b0-7b03-46de-8c76-2a37e44a02df","Type":"ContainerStarted","Data":"45e1887f2222c622b4e31473b0ff7feaf435d309c41cdd66e82977f0411a341e"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.818669 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"54e71500-a592-4c97-86c1-4f3f6a4d1b41","Type":"ContainerStarted","Data":"82b041e18447f832e42be57b619cfa5f2d216bdb2d56a89acb9aa6d12074ef52"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.924642 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-pc9sf" podStartSLOduration=27.608604982 podStartE2EDuration="33.924608187s" podCreationTimestamp="2026-02-16 15:12:11 +0000 UTC" firstStartedPulling="2026-02-16 15:12:34.270856564 +0000 UTC m=+1148.455833640" lastFinishedPulling="2026-02-16 15:12:40.586859769 +0000 UTC m=+1154.771836845" observedRunningTime="2026-02-16 15:12:44.868916591 +0000 UTC m=+1159.053893667" watchObservedRunningTime="2026-02-16 15:12:44.924608187 +0000 UTC m=+1159.109585263" Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.954982 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.164008579 podStartE2EDuration="33.954953891s" podCreationTimestamp="2026-02-16 15:12:11 +0000 UTC" firstStartedPulling="2026-02-16 15:12:32.625883562 +0000 UTC m=+1146.810860638" lastFinishedPulling="2026-02-16 15:12:42.416828874 +0000 UTC m=+1156.601805950" observedRunningTime="2026-02-16 15:12:44.919745431 +0000 UTC m=+1159.104722507" watchObservedRunningTime="2026-02-16 15:12:44.954953891 +0000 UTC m=+1159.139930967" Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.968084 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.066136325 podStartE2EDuration="30.96806204s" podCreationTimestamp="2026-02-16 15:12:14 +0000 UTC" firstStartedPulling="2026-02-16 15:12:33.509457926 +0000 UTC m=+1147.694435002" lastFinishedPulling="2026-02-16 15:12:42.411383641 +0000 UTC m=+1156.596360717" observedRunningTime="2026-02-16 15:12:44.954530779 +0000 UTC m=+1159.139507875" watchObservedRunningTime="2026-02-16 15:12:44.96806204 +0000 UTC m=+1159.153039116" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.197524 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.198063 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.498748 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.624535 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.663887 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.829502 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.867152 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.952223 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.143722 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"]
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.171715 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.171780 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.206478 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"]
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.208707 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.219849 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.258703 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jbdgd"]
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.260526 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.263094 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.265979 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.266030 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.266070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.266210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.266427 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.268594 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"]
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.316023 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jbdgd"]
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.369907 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-config\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.369976 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370007 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370041 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370069 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370168 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6m6n\" (UniqueName: \"kubernetes.io/projected/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-kube-api-access-s6m6n\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370205 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovs-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370228 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-combined-ca-bundle\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370254 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370289 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovn-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.371116 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.372176 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.372483 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.395227 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.479311 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6m6n\" (UniqueName: \"kubernetes.io/projected/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-kube-api-access-s6m6n\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.483165 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovs-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.483673 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-combined-ca-bundle\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.483627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovs-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.484808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovn-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.484965 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-config\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.485187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.486145 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovn-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.487390 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-config\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.511442 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.517908 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6m6n\" (UniqueName: \"kubernetes.io/projected/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-kube-api-access-s6m6n\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.517993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-combined-ca-bundle\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.522225 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.524267 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.552602 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.598268 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jbdgd"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.623898 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"]
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.649574 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"]
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.677853 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.677886 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"]
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.699861 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.804723 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.802587 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.805553 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.805624 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.805708 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.805740 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.869209 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" event={"ID":"d7dbc743-b65f-414c-adef-c3e8e158e4dc","Type":"ContainerDied","Data":"cba1b72db61c105e5863e586d645a2f7e94a83ed46db96da197a374840b783e3"}
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.869443 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.910308 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") pod \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") "
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.910868 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") pod \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") "
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911194 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") pod \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") "
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911576 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911769 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911841 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911987 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.912155 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.913407 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7dbc743-b65f-414c-adef-c3e8e158e4dc" (UID: "d7dbc743-b65f-414c-adef-c3e8e158e4dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.916455 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.917024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.917725 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.918214 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config" (OuterVolumeSpecName: "config") pod "d7dbc743-b65f-414c-adef-c3e8e158e4dc" (UID: "d7dbc743-b65f-414c-adef-c3e8e158e4dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.920056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.921428 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k" (OuterVolumeSpecName: "kube-api-access-bms9k") pod "d7dbc743-b65f-414c-adef-c3e8e158e4dc" (UID: "d7dbc743-b65f-414c-adef-c3e8e158e4dc"). InnerVolumeSpecName "kube-api-access-bms9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.942770 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.944198 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.023873 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.023907 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.023957 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.092184 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.276341 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.294527 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.315743 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jbdgd"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.327307 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.334906 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.341927 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.342244 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.342402 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.346875 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jx8cn"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.370562 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.384215 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438620 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-config\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438737 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438824 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675g6\" (UniqueName: \"kubernetes.io/projected/1ca8a807-8e20-4d12-8355-09c1883163ca-kube-api-access-675g6\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438948 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438975 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.439023 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-scripts\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.439042 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541441 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541521 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-config\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541572 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541642 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-675g6\" (UniqueName: \"kubernetes.io/projected/1ca8a807-8e20-4d12-8355-09c1883163ca-kube-api-access-675g6\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541730 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541753 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541787 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-scripts\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.542916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-scripts\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.543680 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-config\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.545402 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.552605 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.552655 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.557830 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.563629 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-675g6\" (UniqueName: \"kubernetes.io/projected/1ca8a807-8e20-4d12-8355-09c1883163ca-kube-api-access-675g6\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.616072 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.617756 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-340c-account-create-update-htclx"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.620135 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.630879 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.692195 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zf4nh"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.694010 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zf4nh"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.710188 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zf4nh"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.743277 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.757049 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.757240 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.757405 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.757458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh"
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.779536 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7hxxb"]
Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.781150 4705 util.go:30] "No
sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.794255 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7hxxb"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.859862 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.859985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.860114 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.860148 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.860197 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.860236 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.861678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.862339 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.867608 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.869139 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.876242 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.883253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerStarted","Data":"d300d6e6a8e721e23b118ec6cd1d7277765e081fcd0cf727ad7a0cfd4099f2fa"} Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.883957 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.884500 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.886431 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jbdgd" event={"ID":"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772","Type":"ContainerStarted","Data":"2d3c1f8b23cb89332cae64e908bf8a38b99bfe8924450d91aaa1b4576a0f68f6"} Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.891589 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.965966 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.966075 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.966204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.966620 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.967228 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.976393 4705 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.984645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.056941 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.091153 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.096226 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.100735 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 
15:12:48.108275 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.122879 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.253047 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.438710 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7dbc743-b65f-414c-adef-c3e8e158e4dc" path="/var/lib/kubelet/pods/d7dbc743-b65f-414c-adef-c3e8e158e4dc/volumes" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.492587 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"] Feb 16 15:12:48 crc kubenswrapper[4705]: W0216 15:12:48.575564 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf9cafcc_24ed_4b80_9483_33f60d92f00f.slice/crio-4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6 WatchSource:0}: Error finding container 4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6: Status 404 returned error can't find the container with id 4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6 Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.709610 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.711796 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.723056 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.723227 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.730648 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.802690 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.826674 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.826834 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") pod 
\"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.827749 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.829288 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.830962 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.842294 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.842882 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.852783 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.898731 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" 
event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerStarted","Data":"4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6"} Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.901861 4705 generic.go:334] "Generic (PLEG): container finished" podID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" containerID="624a040076e9481702a4c8515e6484398440390ac5169bec50ef29cc5f828a9c" exitCode=0 Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.903159 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" event={"ID":"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec","Type":"ContainerDied","Data":"624a040076e9481702a4c8515e6484398440390ac5169bec50ef29cc5f828a9c"} Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.032849 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.033419 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.103406 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.136581 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.136661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.137332 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.175315 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.235787 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7hxxb"] Feb 16 15:12:49 crc 
kubenswrapper[4705]: I0216 15:12:49.246736 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.261755 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"] Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.272859 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zf4nh"] Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.314177 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.441007 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.470740 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.621286 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.751358 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") pod \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.751906 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") pod \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.752191 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") pod \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.857240 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config" (OuterVolumeSpecName: "config") pod "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" (UID: "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.869224 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.887641 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td" (OuterVolumeSpecName: "kube-api-access-4s7td") pod "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" (UID: "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec"). InnerVolumeSpecName "kube-api-access-4s7td". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.959282 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" (UID: "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.980520 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.981734 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.992884 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-340c-account-create-update-htclx" event={"ID":"b2232806-cac7-4787-839b-9bcecac93820","Type":"ContainerStarted","Data":"3fdde6bf2ee1b1702f08cb70c219c91b36ee883cbd73c8d9f4661db6a85f4944"} Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.995327 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" event={"ID":"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec","Type":"ContainerDied","Data":"483f41b8e768070c0e3971042788df02650602d14770eb6fc300e60a9f3c1c36"} Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.995360 4705 scope.go:117] "RemoveContainer" containerID="624a040076e9481702a4c8515e6484398440390ac5169bec50ef29cc5f828a9c" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.995506 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.008800 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jbdgd" event={"ID":"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772","Type":"ContainerStarted","Data":"25e5df20b5ac0f419ea672a4b6835dc8eab8bdf24f46ceaaacefb9c081c9f388"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.010758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78e4-account-create-update-475d7" event={"ID":"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc","Type":"ContainerStarted","Data":"a59235e29e44d652ce2af2bb1d572a870948acfb1d24657f2e416c6610c19271"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.012605 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1ca8a807-8e20-4d12-8355-09c1883163ca","Type":"ContainerStarted","Data":"ccbae3cf8036f73dabe6b4d81802e346d096084f62a4df545bbbc7c49f750351"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.020665 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zf4nh" event={"ID":"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca","Type":"ContainerStarted","Data":"0cd816be62ee7b758436a143d7764c7aad278e11525f8b11522fd076ebb1aca6"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.026084 4705 generic.go:334] "Generic (PLEG): container finished" podID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerID="01f75b10ed3403636c6ff4d8d3dc13406165f688cf513365a0ee3449c67e9dd6" exitCode=0 Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.026169 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerDied","Data":"01f75b10ed3403636c6ff4d8d3dc13406165f688cf513365a0ee3449c67e9dd6"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.032950 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-db-create-7hxxb" event={"ID":"3f443bcd-c93f-4b89-a048-cc92f28f5854","Type":"ContainerStarted","Data":"b883422aff22cd30343e7806da99354205b58e98ffb974a4564c1e46d5973c51"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.054703 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jbdgd" podStartSLOduration=4.054675722 podStartE2EDuration="4.054675722s" podCreationTimestamp="2026-02-16 15:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:50.03292198 +0000 UTC m=+1164.217899056" watchObservedRunningTime="2026-02-16 15:12:50.054675722 +0000 UTC m=+1164.239652798" Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.124837 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.137137 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.390193 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.452069 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" path="/var/lib/kubelet/pods/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec/volumes" Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.518358 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:12:50 crc kubenswrapper[4705]: W0216 15:12:50.542847 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda486f037_5709_4199_9f76_0cb0c995af25.slice/crio-f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96 WatchSource:0}: Error finding container f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96: Status 404 returned error can't find the container with id f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.059010 4705 generic.go:334] "Generic (PLEG): container finished" podID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" containerID="931b20b998ef273223e9f5d6e3f1f3e4584cf0ee619597e2b65633773ea18c75" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.059090 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78e4-account-create-update-475d7" event={"ID":"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc","Type":"ContainerDied","Data":"931b20b998ef273223e9f5d6e3f1f3e4584cf0ee619597e2b65633773ea18c75"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.064817 4705 generic.go:334] "Generic (PLEG): container finished" podID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" containerID="d200b7c2e16f651dc486f4322085e2d7e7499ef7b85b5e81ebde83ca03928405" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.065597 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zf4nh" event={"ID":"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca","Type":"ContainerDied","Data":"d200b7c2e16f651dc486f4322085e2d7e7499ef7b85b5e81ebde83ca03928405"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.069125 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" event={"ID":"a486f037-5709-4199-9f76-0cb0c995af25","Type":"ContainerStarted","Data":"f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.072345 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="3f443bcd-c93f-4b89-a048-cc92f28f5854" containerID="8cbd1af309adfc1dafcf0ea3d77759d2f86265b9808b0b7435417bb754ee409d" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.072476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7hxxb" event={"ID":"3f443bcd-c93f-4b89-a048-cc92f28f5854","Type":"ContainerDied","Data":"8cbd1af309adfc1dafcf0ea3d77759d2f86265b9808b0b7435417bb754ee409d"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.089621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0","Type":"ContainerStarted","Data":"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.090018 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.102868 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" event={"ID":"f37b9312-710d-49b4-8cc7-3956df176627","Type":"ContainerStarted","Data":"688b2130c66b5cedadd83f7eb71a2a00275c8148d969e68e1dce039d0f445cc4"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.108259 4705 generic.go:334] "Generic (PLEG): container finished" podID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerID="aab3bf3fd9a6ac7b00f1d7f4d403634f6903e2d7b39a53d0805702ee717f2a00" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.108328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerDied","Data":"aab3bf3fd9a6ac7b00f1d7f4d403634f6903e2d7b39a53d0805702ee717f2a00"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.116938 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"db14762a-eebd-41a0-b107-e879fedc05f1","Type":"ContainerStarted","Data":"0cd96d2ab8811d31f81a2459e20cd49de9c11b08a9a5f74ff92a026484ef6d86"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.117705 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.121932 4705 generic.go:334] "Generic (PLEG): container finished" podID="761a74d6-061c-47dd-b376-b6d6a1906382" containerID="f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.121985 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.133251 4705 generic.go:334] "Generic (PLEG): container finished" podID="b2232806-cac7-4787-839b-9bcecac93820" containerID="a6d8674e75cd34a23ae23cec074aadbd60e573be5fb8f1c35656725571554e5a" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.133756 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-340c-account-create-update-htclx" event={"ID":"b2232806-cac7-4787-839b-9bcecac93820","Type":"ContainerDied","Data":"a6d8674e75cd34a23ae23cec074aadbd60e573be5fb8f1c35656725571554e5a"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.148139 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.28460356 podStartE2EDuration="43.14811692s" podCreationTimestamp="2026-02-16 15:12:08 +0000 UTC" firstStartedPulling="2026-02-16 15:12:18.821045821 +0000 UTC m=+1133.006022907" lastFinishedPulling="2026-02-16 15:12:49.684559191 +0000 UTC m=+1163.869536267" observedRunningTime="2026-02-16 15:12:51.131556044 +0000 UTC m=+1165.316533130" 
watchObservedRunningTime="2026-02-16 15:12:51.14811692 +0000 UTC m=+1165.333093996" Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.221002 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.199647631 podStartE2EDuration="45.22097469s" podCreationTimestamp="2026-02-16 15:12:06 +0000 UTC" firstStartedPulling="2026-02-16 15:12:18.80431125 +0000 UTC m=+1132.989288336" lastFinishedPulling="2026-02-16 15:12:49.825638319 +0000 UTC m=+1164.010615395" observedRunningTime="2026-02-16 15:12:51.191047288 +0000 UTC m=+1165.376024384" watchObservedRunningTime="2026-02-16 15:12:51.22097469 +0000 UTC m=+1165.405951766" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.146148 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerStarted","Data":"fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.146937 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.147784 4705 generic.go:334] "Generic (PLEG): container finished" podID="a486f037-5709-4199-9f76-0cb0c995af25" containerID="5297f3386efbde9d5a58546d4fc2397672bac40dc5cdf3c17082d57b2647467b" exitCode=0 Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.147907 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" event={"ID":"a486f037-5709-4199-9f76-0cb0c995af25","Type":"ContainerDied","Data":"5297f3386efbde9d5a58546d4fc2397672bac40dc5cdf3c17082d57b2647467b"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.150669 4705 generic.go:334] "Generic (PLEG): container finished" podID="f37b9312-710d-49b4-8cc7-3956df176627" 
containerID="0017c5743d3acab30b80453ad1028a61abdf169aafcd88d8f11df99404053765" exitCode=0 Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.150763 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" event={"ID":"f37b9312-710d-49b4-8cc7-3956df176627","Type":"ContainerDied","Data":"0017c5743d3acab30b80453ad1028a61abdf169aafcd88d8f11df99404053765"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.153954 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1ca8a807-8e20-4d12-8355-09c1883163ca","Type":"ContainerStarted","Data":"1f99fd45eed6bf685ed300e7f393668468b0c7931b21bb607ffea6c3c1cb525b"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.153979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1ca8a807-8e20-4d12-8355-09c1883163ca","Type":"ContainerStarted","Data":"279ea7b10b67eb648191adbb17bb2c82178fd214ff50ae04c0dcea64bcdb5bf9"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.154444 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.156895 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerStarted","Data":"45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.173461 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" podStartSLOduration=6.173439642 podStartE2EDuration="6.173439642s" podCreationTimestamp="2026-02-16 15:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:52.167911127 +0000 UTC m=+1166.352888223" 
watchObservedRunningTime="2026-02-16 15:12:52.173439642 +0000 UTC m=+1166.358416718" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.207559 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" podStartSLOduration=6.207536501 podStartE2EDuration="6.207536501s" podCreationTimestamp="2026-02-16 15:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:52.194130074 +0000 UTC m=+1166.379107150" watchObservedRunningTime="2026-02-16 15:12:52.207536501 +0000 UTC m=+1166.392513577" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.219542 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.295756624 podStartE2EDuration="5.219518168s" podCreationTimestamp="2026-02-16 15:12:47 +0000 UTC" firstStartedPulling="2026-02-16 15:12:49.527280397 +0000 UTC m=+1163.712257473" lastFinishedPulling="2026-02-16 15:12:51.451041941 +0000 UTC m=+1165.636019017" observedRunningTime="2026-02-16 15:12:52.214706793 +0000 UTC m=+1166.399683889" watchObservedRunningTime="2026-02-16 15:12:52.219518168 +0000 UTC m=+1166.404495244" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.717888 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.871246 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") pod \"3f443bcd-c93f-4b89-a048-cc92f28f5854\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.871763 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") pod \"3f443bcd-c93f-4b89-a048-cc92f28f5854\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.873220 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f443bcd-c93f-4b89-a048-cc92f28f5854" (UID: "3f443bcd-c93f-4b89-a048-cc92f28f5854"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.895455 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q" (OuterVolumeSpecName: "kube-api-access-ngs6q") pod "3f443bcd-c93f-4b89-a048-cc92f28f5854" (UID: "3f443bcd-c93f-4b89-a048-cc92f28f5854"). InnerVolumeSpecName "kube-api-access-ngs6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.979384 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.979429 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.006349 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.018013 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.033296 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.080569 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") pod \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.080618 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") pod \"b2232806-cac7-4787-839b-9bcecac93820\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.080664 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") pod \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.081353 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" (UID: "cace81ee-1e82-4eb9-b5fa-7837c7dc69bc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.081948 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") pod \"b2232806-cac7-4787-839b-9bcecac93820\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.083317 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.083581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2232806-cac7-4787-839b-9bcecac93820" (UID: "b2232806-cac7-4787-839b-9bcecac93820"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.087440 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj" (OuterVolumeSpecName: "kube-api-access-mjpnj") pod "cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" (UID: "cace81ee-1e82-4eb9-b5fa-7837c7dc69bc"). InnerVolumeSpecName "kube-api-access-mjpnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.087482 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6" (OuterVolumeSpecName: "kube-api-access-f8vl6") pod "b2232806-cac7-4787-839b-9bcecac93820" (UID: "b2232806-cac7-4787-839b-9bcecac93820"). 
InnerVolumeSpecName "kube-api-access-f8vl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.169119 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zf4nh" event={"ID":"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca","Type":"ContainerDied","Data":"0cd816be62ee7b758436a143d7764c7aad278e11525f8b11522fd076ebb1aca6"} Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.169179 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cd816be62ee7b758436a143d7764c7aad278e11525f8b11522fd076ebb1aca6" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.169137 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.172317 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7hxxb" event={"ID":"3f443bcd-c93f-4b89-a048-cc92f28f5854","Type":"ContainerDied","Data":"b883422aff22cd30343e7806da99354205b58e98ffb974a4564c1e46d5973c51"} Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.172380 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b883422aff22cd30343e7806da99354205b58e98ffb974a4564c1e46d5973c51" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.172459 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.180181 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.180172 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-340c-account-create-update-htclx" event={"ID":"b2232806-cac7-4787-839b-9bcecac93820","Type":"ContainerDied","Data":"3fdde6bf2ee1b1702f08cb70c219c91b36ee883cbd73c8d9f4661db6a85f4944"} Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.180295 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fdde6bf2ee1b1702f08cb70c219c91b36ee883cbd73c8d9f4661db6a85f4944" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.184013 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78e4-account-create-update-475d7" event={"ID":"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc","Type":"ContainerDied","Data":"a59235e29e44d652ce2af2bb1d572a870948acfb1d24657f2e416c6610c19271"} Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.184050 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.184098 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a59235e29e44d652ce2af2bb1d572a870948acfb1d24657f2e416c6610c19271" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.184885 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.186880 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") pod \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.186956 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") pod \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.188627 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" (UID: "69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.189043 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.189057 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.189067 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.189078 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.193355 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n" (OuterVolumeSpecName: "kube-api-access-q669n") pod "69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" (UID: "69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca"). InnerVolumeSpecName "kube-api-access-q669n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.292501 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.541653 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cjnqj"] Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542360 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542400 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542424 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" containerName="init" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542431 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" containerName="init" Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542455 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f443bcd-c93f-4b89-a048-cc92f28f5854" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542462 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f443bcd-c93f-4b89-a048-cc92f28f5854" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542474 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2232806-cac7-4787-839b-9bcecac93820" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542480 4705 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b2232806-cac7-4787-839b-9bcecac93820" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542503 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542511 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542889 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f443bcd-c93f-4b89-a048-cc92f28f5854" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542914 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542937 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2232806-cac7-4787-839b-9bcecac93820" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542961 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" containerName="init" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542974 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.544057 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.547999 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.565973 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cjnqj"] Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.679158 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.741270 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.744214 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.781842 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.845985 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") pod \"f37b9312-710d-49b4-8cc7-3956df176627\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.846039 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") pod \"f37b9312-710d-49b4-8cc7-3956df176627\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.846966 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.847264 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f37b9312-710d-49b4-8cc7-3956df176627" (UID: "f37b9312-710d-49b4-8cc7-3956df176627"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.848484 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.848636 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.849253 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.852268 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5" (OuterVolumeSpecName: "kube-api-access-ftvm5") pod "f37b9312-710d-49b4-8cc7-3956df176627" (UID: "f37b9312-710d-49b4-8cc7-3956df176627"). InnerVolumeSpecName "kube-api-access-ftvm5". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.863482 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj"
Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.872647 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cjnqj"
Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.950415 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") pod \"a486f037-5709-4199-9f76-0cb0c995af25\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") "
Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.950792 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") pod \"a486f037-5709-4199-9f76-0cb0c995af25\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") "
Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.951520 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.952079 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a486f037-5709-4199-9f76-0cb0c995af25" (UID: "a486f037-5709-4199-9f76-0cb0c995af25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.954429 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7" (OuterVolumeSpecName: "kube-api-access-sq9j7") pod "a486f037-5709-4199-9f76-0cb0c995af25" (UID: "a486f037-5709-4199-9f76-0cb0c995af25"). InnerVolumeSpecName "kube-api-access-sq9j7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.053751 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.054119 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.200618 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc"
Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.210002 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" event={"ID":"a486f037-5709-4199-9f76-0cb0c995af25","Type":"ContainerDied","Data":"f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96"}
Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.210069 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96"
Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.213663 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs"
Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.214084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" event={"ID":"f37b9312-710d-49b4-8cc7-3956df176627","Type":"ContainerDied","Data":"688b2130c66b5cedadd83f7eb71a2a00275c8148d969e68e1dce039d0f445cc4"}
Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.214160 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="688b2130c66b5cedadd83f7eb71a2a00275c8148d969e68e1dce039d0f445cc4"
Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.352296 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cjnqj"]
Feb 16 15:12:55 crc kubenswrapper[4705]: I0216 15:12:55.252928 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cjnqj" event={"ID":"3edc4e5d-5b55-47b9-8aba-24b10b827f82","Type":"ContainerDied","Data":"02261dd51fff83f1f769426874aaf3ab8c54221acecfe72a2bd0b7b7e293e788"}
Feb 16 15:12:55 crc kubenswrapper[4705]: I0216 15:12:55.252772 4705 generic.go:334] "Generic (PLEG): container finished" podID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" containerID="02261dd51fff83f1f769426874aaf3ab8c54221acecfe72a2bd0b7b7e293e788" exitCode=0
Feb 16 15:12:55 crc kubenswrapper[4705]: I0216 15:12:55.253651 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cjnqj" event={"ID":"3edc4e5d-5b55-47b9-8aba-24b10b827f82","Type":"ContainerStarted","Data":"310ac08e44b724036b68cfeafd23a9520a9e42bc7e1946e153df3dba4f2b4130"}
Feb 16 15:12:56 crc kubenswrapper[4705]: I0216 15:12:56.589536 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:56 crc kubenswrapper[4705]: I0216 15:12:56.852481 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.073093 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gg5c2"]
Feb 16 15:12:57 crc kubenswrapper[4705]: E0216 15:12:57.073823 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37b9312-710d-49b4-8cc7-3956df176627" containerName="mariadb-account-create-update"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.073852 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37b9312-710d-49b4-8cc7-3956df176627" containerName="mariadb-account-create-update"
Feb 16 15:12:57 crc kubenswrapper[4705]: E0216 15:12:57.073872 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a486f037-5709-4199-9f76-0cb0c995af25" containerName="mariadb-database-create"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.073882 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a486f037-5709-4199-9f76-0cb0c995af25" containerName="mariadb-database-create"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.074175 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37b9312-710d-49b4-8cc7-3956df176627" containerName="mariadb-account-create-update"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.074214 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a486f037-5709-4199-9f76-0cb0c995af25" containerName="mariadb-database-create"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.075317 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gg5c2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.087790 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gg5c2"]
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.094562 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.162351 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"]
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.209223 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"]
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.211329 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.213573 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.216877 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"]
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.253435 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.253867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.281198 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="dnsmasq-dns" containerID="cri-o://45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c" gracePeriod=10
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.356729 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.357181 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.357233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.357438 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.358841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.376421 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.402575 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gg5c2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.459531 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.459610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.460560 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.478531 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.533676 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.297499 4705 generic.go:334] "Generic (PLEG): container finished" podID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerID="45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c" exitCode=0
Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.297570 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerDied","Data":"45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c"}
Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.877524 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"]
Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.883516 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.906961 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"]
Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.973119 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.001904 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.001983 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.002025 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.002060 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.002127 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.104878 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.105355 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.105732 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.105930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.106076 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.106310 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.106952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.110886 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.113391 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.134721 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.226237 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.237136 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"]
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.239072 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.265076 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"]
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.323197 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.324454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.427426 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.427642 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.428846 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.460505 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"]
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.461687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.463865 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.466358 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.498827 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"]
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.564584 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.636804 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.637022 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.647946 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cjnqj"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.738336 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") pod \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") "
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.738980 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") pod \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") "
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.741464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.742086 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.742697 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3edc4e5d-5b55-47b9-8aba-24b10b827f82" (UID: "3edc4e5d-5b55-47b9-8aba-24b10b827f82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.743430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.762951 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf" (OuterVolumeSpecName: "kube-api-access-hg7gf") pod "3edc4e5d-5b55-47b9-8aba-24b10b827f82" (UID: "3edc4e5d-5b55-47b9-8aba-24b10b827f82"). InnerVolumeSpecName "kube-api-access-hg7gf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.772474 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.846026 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.846062 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.905528 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.966016 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.017802 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.018872 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" containerName="mariadb-account-create-update"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.018939 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" containerName="mariadb-account-create-update"
Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.019008 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="init"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.019057 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="init"
Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.019127 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="dnsmasq-dns"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.019180 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="dnsmasq-dns"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.019456 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" containerName="mariadb-account-create-update"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.019538 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="dnsmasq-dns"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.029853 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.040896 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.040996 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gs8lf"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.041217 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.044948 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.049160 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") pod \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") "
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.049217 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") pod \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") "
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.049472 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") pod \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") "
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.049505 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") pod \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") "
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.068588 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.071223 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96" (OuterVolumeSpecName: "kube-api-access-npq96") pod "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" (UID: "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d"). InnerVolumeSpecName "kube-api-access-npq96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.116438 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config" (OuterVolumeSpecName: "config") pod "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" (UID: "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.153915 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" (UID: "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.154815 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.154888 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-cache\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155776 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c8c609-3b8c-48d1-9731-56451bf10919-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155810 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-lock\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155831 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnvlc\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-kube-api-access-wnvlc\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155881
4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155942 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155956 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155966 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.164961 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" (UID: "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.203288 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"] Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258302 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258433 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-cache\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258629 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c8c609-3b8c-48d1-9731-56451bf10919-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258672 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-lock\") pod \"swift-storage-0\" (UID: 
\"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvlc\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-kube-api-access-wnvlc\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.258745 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.258781 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258786 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.258856 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:00.758833328 +0000 UTC m=+1174.943810394 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.259128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-cache\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.259656 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-lock\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.265677 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
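The `MountVolume.SetUp` failure above recurs because the `swift-ring-files` ConfigMap backing the `etc-swift` projected volume does not exist yet, and `nestedpendingoperations.go` reschedules the operation with a doubling backoff (500ms here, then 1s and 2s in the later retries). A minimal sketch of that doubling policy, assuming the usual kubelet parameters (initial 500ms, factor 2, capped at roughly 2m2s) — the function name and cap are illustrative, not kubelet's actual API:

```python
from datetime import timedelta

def backoff_durations(initial=timedelta(milliseconds=500),
                      factor=2.0,
                      cap=timedelta(minutes=2, seconds=2),
                      attempts=5):
    """Yield the wait before each retry, doubling until the cap.

    Mirrors the durationBeforeRetry progression visible in this log
    (500ms -> 1s -> 2s) for the repeatedly failing etc-swift mount.
    """
    d = initial
    for _ in range(attempts):
        yield min(d, cap)
        d = timedelta(seconds=d.total_seconds() * factor)
```

Under these assumptions the first three waits are 500ms, 1s, and 2s, matching the three `durationBeforeRetry` values this log records before the `swift-ring-rebalance` job can publish the missing ConfigMap.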
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.265710 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f656772c32ef3299954509100c551f8dec1696aec746556cecee02eefe5d595/globalmount\"" pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.269706 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c8c609-3b8c-48d1-9731-56451bf10919-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.276898 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvlc\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-kube-api-access-wnvlc\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.298283 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.357472 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cjnqj" 
event={"ID":"3edc4e5d-5b55-47b9-8aba-24b10b827f82","Type":"ContainerDied","Data":"310ac08e44b724036b68cfeafd23a9520a9e42bc7e1946e153df3dba4f2b4130"} Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.357518 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cjnqj" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.357527 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310ac08e44b724036b68cfeafd23a9520a9e42bc7e1946e153df3dba4f2b4130" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.367356 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8"} Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.368966 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerDied","Data":"d300d6e6a8e721e23b118ec6cd1d7277765e081fcd0cf727ad7a0cfd4099f2fa"} Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.368999 4705 scope.go:117] "RemoveContainer" containerID="45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.369127 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.377714 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" event={"ID":"45a2df1c-b87d-4765-b900-e6b165802be2","Type":"ContainerStarted","Data":"5e7336f58c339522bc73a0fe5659f35b381591982a9cc86de5f68644ba55b5d8"} Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.410040 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gg5c2"] Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.421120 4705 scope.go:117] "RemoveContainer" containerID="01f75b10ed3403636c6ff4d8d3dc13406165f688cf513365a0ee3449c67e9dd6" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.433644 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"] Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.433683 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"] Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.440236 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"] Feb 16 15:13:00 crc kubenswrapper[4705]: W0216 15:13:00.550279 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf8b1ad4_1803_403b_bc68_8c6ccb877b11.slice/crio-d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94 WatchSource:0}: Error finding container d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94: Status 404 returned error can't find the container with id d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94 Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.655725 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"] Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 
15:13:00.685194 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bkfjd"] Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.701008 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.701763 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bkfjd"] Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.707190 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.707248 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.707193 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787270 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") pod 
\"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787361 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787439 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787462 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787491 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787522 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.787755 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.787785 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.787835 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:01.787818528 +0000 UTC m=+1175.972795604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.878770 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"] Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889332 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889406 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889603 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889626 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889719 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") pod 
\"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.891125 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.891754 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.895891 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.896727 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.897569 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc 
kubenswrapper[4705]: I0216 15:13:00.901234 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.911406 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.040556 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.390124 4705 generic.go:334] "Generic (PLEG): container finished" podID="45a2df1c-b87d-4765-b900-e6b165802be2" containerID="e22a4e97a46141c555ff698e641012530b3f1b9226d8679c4a611d3291ce6a4f" exitCode=0 Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.390177 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" event={"ID":"45a2df1c-b87d-4765-b900-e6b165802be2","Type":"ContainerDied","Data":"e22a4e97a46141c555ff698e641012530b3f1b9226d8679c4a611d3291ce6a4f"} Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.394424 4705 generic.go:334] "Generic (PLEG): container finished" podID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" containerID="65b95c950083c9aeb3e3619fc2bb885d98f3037af8bdbac9d4afb42843773d92" exitCode=0 Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.394474 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gg5c2" 
event={"ID":"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7","Type":"ContainerDied","Data":"65b95c950083c9aeb3e3619fc2bb885d98f3037af8bdbac9d4afb42843773d92"} Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.394491 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gg5c2" event={"ID":"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7","Type":"ContainerStarted","Data":"5c05ddc50d2e35e4ecf7a88f36416b9a90fec34ec02a9d4b84ccb8c7c76e6af6"} Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.401534 4705 generic.go:334] "Generic (PLEG): container finished" podID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerID="707d5db016ee71c7be05915614101d9c579374a5ac210067cf65362c8d2b2120" exitCode=0 Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.401877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerDied","Data":"707d5db016ee71c7be05915614101d9c579374a5ac210067cf65362c8d2b2120"} Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.401911 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerStarted","Data":"d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94"} Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.413443 4705 generic.go:334] "Generic (PLEG): container finished" podID="5c5de6a8-c858-4f91-8833-e012562ee1a3" containerID="2f79d797c3129ced8ee4fbe01de9894c6da786bc25e0e54f5445a9d4c4891698" exitCode=0 Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.413586 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6ad-account-create-update-f24b2" event={"ID":"5c5de6a8-c858-4f91-8833-e012562ee1a3","Type":"ContainerDied","Data":"2f79d797c3129ced8ee4fbe01de9894c6da786bc25e0e54f5445a9d4c4891698"} Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.413638 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6ad-account-create-update-f24b2" event={"ID":"5c5de6a8-c858-4f91-8833-e012562ee1a3","Type":"ContainerStarted","Data":"6a130d99140b466fd7bcb2b0621ecdca894d7e137114770d3dee911480e86be0"} Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.422841 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" event={"ID":"3c074c5c-fae9-49f3-8139-adb92b649951","Type":"ContainerStarted","Data":"55a8a589929400f0bdc43a4b2e65afccb3545d7c47842f8b1d91a93888750508"} Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.422915 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" event={"ID":"3c074c5c-fae9-49f3-8139-adb92b649951","Type":"ContainerStarted","Data":"37130c25ccf30f81bfb898209b94760a4b3f1cb5bbdc1b8815367c32a46d2055"} Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.506142 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" podStartSLOduration=2.506114974 podStartE2EDuration="2.506114974s" podCreationTimestamp="2026-02-16 15:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:01.47504142 +0000 UTC m=+1175.660018496" watchObservedRunningTime="2026-02-16 15:13:01.506114974 +0000 UTC m=+1175.691092050" Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.573075 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bkfjd"] Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.815472 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " 
pod="openstack/swift-storage-0"
Feb 16 15:13:01 crc kubenswrapper[4705]: E0216 15:13:01.816274 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 15:13:01 crc kubenswrapper[4705]: E0216 15:13:01.816665 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 15:13:01 crc kubenswrapper[4705]: E0216 15:13:01.816745 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:03.816726791 +0000 UTC m=+1178.001703857 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.439903 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" path="/var/lib/kubelet/pods/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d/volumes"
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.441205 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.441253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerStarted","Data":"f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987"}
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.441282 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bkfjd" event={"ID":"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d","Type":"ContainerStarted","Data":"dfe132517673f75467ba9259f9327d854f3707668943a8696e2a0f96d6cf192b"}
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.442331 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c074c5c-fae9-49f3-8139-adb92b649951" containerID="55a8a589929400f0bdc43a4b2e65afccb3545d7c47842f8b1d91a93888750508" exitCode=0
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.442556 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" event={"ID":"3c074c5c-fae9-49f3-8139-adb92b649951","Type":"ContainerDied","Data":"55a8a589929400f0bdc43a4b2e65afccb3545d7c47842f8b1d91a93888750508"}
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.465485 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-zg96k" podStartSLOduration=4.46546243 podStartE2EDuration="4.46546243s" podCreationTimestamp="2026-02-16 15:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:02.461128228 +0000 UTC m=+1176.646105334" watchObservedRunningTime="2026-02-16 15:13:02.46546243 +0000 UTC m=+1176.650439506"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.325623 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gg5c2"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.379747 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") pod \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") "
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.379902 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") pod \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") "
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.382625 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" (UID: "19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.383974 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.390614 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v" (OuterVolumeSpecName: "kube-api-access-kmn6v") pod "19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" (UID: "19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7"). InnerVolumeSpecName "kube-api-access-kmn6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.454513 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.454834 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gg5c2" event={"ID":"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7","Type":"ContainerDied","Data":"5c05ddc50d2e35e4ecf7a88f36416b9a90fec34ec02a9d4b84ccb8c7c76e6af6"}
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.454880 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c05ddc50d2e35e4ecf7a88f36416b9a90fec34ec02a9d4b84ccb8c7c76e6af6"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.454853 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gg5c2"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.456986 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6ad-account-create-update-f24b2" event={"ID":"5c5de6a8-c858-4f91-8833-e012562ee1a3","Type":"ContainerDied","Data":"6a130d99140b466fd7bcb2b0621ecdca894d7e137114770d3dee911480e86be0"}
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.457026 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a130d99140b466fd7bcb2b0621ecdca894d7e137114770d3dee911480e86be0"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.458536 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.459264 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" event={"ID":"45a2df1c-b87d-4765-b900-e6b165802be2","Type":"ContainerDied","Data":"5e7336f58c339522bc73a0fe5659f35b381591982a9cc86de5f68644ba55b5d8"}
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.459333 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e7336f58c339522bc73a0fe5659f35b381591982a9cc86de5f68644ba55b5d8"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.459431 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.487892 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") pod \"5c5de6a8-c858-4f91-8833-e012562ee1a3\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") "
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.488111 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") pod \"5c5de6a8-c858-4f91-8833-e012562ee1a3\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") "
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.488205 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") pod \"45a2df1c-b87d-4765-b900-e6b165802be2\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") "
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.488384 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") pod \"45a2df1c-b87d-4765-b900-e6b165802be2\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") "
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.489842 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c5de6a8-c858-4f91-8833-e012562ee1a3" (UID: "5c5de6a8-c858-4f91-8833-e012562ee1a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.492426 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.497214 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45a2df1c-b87d-4765-b900-e6b165802be2" (UID: "45a2df1c-b87d-4765-b900-e6b165802be2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.502934 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn" (OuterVolumeSpecName: "kube-api-access-kvpbn") pod "5c5de6a8-c858-4f91-8833-e012562ee1a3" (UID: "5c5de6a8-c858-4f91-8833-e012562ee1a3"). InnerVolumeSpecName "kube-api-access-kvpbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.510593 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl" (OuterVolumeSpecName: "kube-api-access-bhxfl") pod "45a2df1c-b87d-4765-b900-e6b165802be2" (UID: "45a2df1c-b87d-4765-b900-e6b165802be2"). InnerVolumeSpecName "kube-api-access-bhxfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.595708 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.595759 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.595775 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.595789 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.904627 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:03 crc kubenswrapper[4705]: E0216 15:13:03.904905 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 15:13:03 crc kubenswrapper[4705]: E0216 15:13:03.905420 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 15:13:03 crc kubenswrapper[4705]: E0216 15:13:03.905488 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:07.905468526 +0000 UTC m=+1182.090445602 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.474580 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"}
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.482644 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6ad-account-create-update-f24b2"
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.482794 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" event={"ID":"3c074c5c-fae9-49f3-8139-adb92b649951","Type":"ContainerDied","Data":"37130c25ccf30f81bfb898209b94760a4b3f1cb5bbdc1b8815367c32a46d2055"}
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.482860 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37130c25ccf30f81bfb898209b94760a4b3f1cb5bbdc1b8815367c32a46d2055"
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.490950 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.521954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") pod \"3c074c5c-fae9-49f3-8139-adb92b649951\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") "
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.522205 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") pod \"3c074c5c-fae9-49f3-8139-adb92b649951\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") "
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.523168 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c074c5c-fae9-49f3-8139-adb92b649951" (UID: "3c074c5c-fae9-49f3-8139-adb92b649951"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.523800 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.544066 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j" (OuterVolumeSpecName: "kube-api-access-7mv6j") pod "3c074c5c-fae9-49f3-8139-adb92b649951" (UID: "3c074c5c-fae9-49f3-8139-adb92b649951"). InnerVolumeSpecName "kube-api-access-7mv6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.628346 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:05 crc kubenswrapper[4705]: I0216 15:13:05.100330 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cjnqj"]
Feb 16 15:13:05 crc kubenswrapper[4705]: I0216 15:13:05.113282 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cjnqj"]
Feb 16 15:13:05 crc kubenswrapper[4705]: I0216 15:13:05.493167 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg"
Feb 16 15:13:06 crc kubenswrapper[4705]: I0216 15:13:06.443432 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" path="/var/lib/kubelet/pods/3edc4e5d-5b55-47b9-8aba-24b10b827f82/volumes"
Feb 16 15:13:07 crc kubenswrapper[4705]: I0216 15:13:07.162671 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5cb874789d-44cjq" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" containerID="cri-o://b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095" gracePeriod=15
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.358434 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-2kkpm"]
Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.399125 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" containerName="mariadb-database-create"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.399197 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" containerName="mariadb-database-create"
Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.399249 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a2df1c-b87d-4765-b900-e6b165802be2" containerName="mariadb-database-create"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.399261 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a2df1c-b87d-4765-b900-e6b165802be2" containerName="mariadb-database-create"
Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.399274 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c074c5c-fae9-49f3-8139-adb92b649951" containerName="mariadb-account-create-update"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.399287 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c074c5c-fae9-49f3-8139-adb92b649951" containerName="mariadb-account-create-update"
Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.399315 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5de6a8-c858-4f91-8833-e012562ee1a3" containerName="mariadb-account-create-update"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.399326 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5de6a8-c858-4f91-8833-e012562ee1a3" containerName="mariadb-account-create-update"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.400653 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a2df1c-b87d-4765-b900-e6b165802be2" containerName="mariadb-database-create"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.400712 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5de6a8-c858-4f91-8833-e012562ee1a3" containerName="mariadb-account-create-update"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.400771 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c074c5c-fae9-49f3-8139-adb92b649951" containerName="mariadb-account-create-update"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.400799 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" containerName="mariadb-database-create"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.402047 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2kkpm"]
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.402237 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.412105 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.414843 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hkp6m"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.523703 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.523744 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.523792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.523934 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.528663 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cb874789d-44cjq_5ab25c9f-91f2-46f2-8abf-5004d8c114ad/console/0.log"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.528716 4705 generic.go:334] "Generic (PLEG): container finished" podID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerID="b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095" exitCode=2
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.528961 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb874789d-44cjq" event={"ID":"5ab25c9f-91f2-46f2-8abf-5004d8c114ad","Type":"ContainerDied","Data":"b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095"}
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.626323 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.626592 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.626741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.626765 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.641158 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.641804 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.641976 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.656005 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.772084 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.802693 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.935066 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.935400 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.935432 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.935502 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:15.935478147 +0000 UTC m=+1190.120455223 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:08.539389 4705 generic.go:334] "Generic (PLEG): container finished" podID="139788ad-b160-4139-a6af-094e33c581e5" containerID="c45bc0861e5e942a3fddb03b7864490ab4f0322209d56a4aa3501d6face13652" exitCode=0
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:08.539919 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerDied","Data":"c45bc0861e5e942a3fddb03b7864490ab4f0322209d56a4aa3501d6face13652"}
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:08.873910 4705 patch_prober.go:28] interesting pod/console-5cb874789d-44cjq container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/health\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body=
Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:08.874051 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5cb874789d-44cjq" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.87:8443/health\": dial tcp 10.217.0.87:8443: connect: connection refused"
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.228600 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.312843 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"]
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.313168 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="dnsmasq-dns" containerID="cri-o://fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd" gracePeriod=10
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.561924 4705 generic.go:334] "Generic (PLEG): container finished" podID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerID="fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd" exitCode=0
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.561989 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerDied","Data":"fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd"}
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.866024 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.867954 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.872238 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.879203 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.998536 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.998958 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxz7l\" (UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.999088 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.031099 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.106344 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") "
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") "
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107109 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") "
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107143 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") "
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107270 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") "
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107764 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxz7l\" (UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107909 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.108469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.153729 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd" (OuterVolumeSpecName: "kube-api-access-grvmd") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "kube-api-access-grvmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.158876 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lz7zl"]
Feb 16 15:13:10 crc kubenswrapper[4705]: E0216 15:13:10.159427 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="init"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.159447 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="init"
Feb 16 15:13:10 crc kubenswrapper[4705]: E0216 15:13:10.159467 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="dnsmasq-dns"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.159473 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="dnsmasq-dns"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.159653 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="dnsmasq-dns"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.160440 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lz7zl"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.168345 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.168871 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.174062 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.176648 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxz7l\" (UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.203218 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lz7zl"]
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.236057 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cb874789d-44cjq_5ab25c9f-91f2-46f2-8abf-5004d8c114ad/console/0.log"
Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.236172 4705 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.242445 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.242802 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.242949 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.311983 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.354951 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.356107 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.356243 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.357664 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358339 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358348 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca" (OuterVolumeSpecName: "service-ca") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358439 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358761 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config" (OuterVolumeSpecName: "console-config") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358792 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358943 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.359988 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.361750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.361845 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.362947 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.363285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.363806 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.379758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw" (OuterVolumeSpecName: "kube-api-access-zlmrw") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "kube-api-access-zlmrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396145 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396413 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396426 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396437 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396447 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396458 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396466 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396476 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.398340 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.405454 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.454925 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config" (OuterVolumeSpecName: "config") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.456051 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.504161 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.508262 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.512462 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.512477 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.512495 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.521601 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2kkpm"] Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.603881 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerStarted","Data":"eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.604358 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.614114 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cb874789d-44cjq_5ab25c9f-91f2-46f2-8abf-5004d8c114ad/console/0.log" Feb 16 
15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.614330 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.614576 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb874789d-44cjq" event={"ID":"5ab25c9f-91f2-46f2-8abf-5004d8c114ad","Type":"ContainerDied","Data":"2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.614631 4705 scope.go:117] "RemoveContainer" containerID="b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.632570 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bkfjd" event={"ID":"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d","Type":"ContainerStarted","Data":"30b1733a19ec2f0e771116151f10812bfa16ad5725d0557df1dec597eb7f8718"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.638653 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerDied","Data":"4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.638795 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.643163 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2kkpm" event={"ID":"1eba064a-3f7c-4395-beca-1b77b85e1a29","Type":"ContainerStarted","Data":"8e5b1c2dc379b87aa6c47ebc3d629748ed51bf65dca39e43eb06d0a9ecab4706"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.655334 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=40.606765288 podStartE2EDuration="1m9.655311895s" podCreationTimestamp="2026-02-16 15:12:01 +0000 UTC" firstStartedPulling="2026-02-16 15:12:04.687546145 +0000 UTC m=+1118.872523221" lastFinishedPulling="2026-02-16 15:12:33.736092752 +0000 UTC m=+1147.921069828" observedRunningTime="2026-02-16 15:13:10.652730273 +0000 UTC m=+1184.837707379" watchObservedRunningTime="2026-02-16 15:13:10.655311895 +0000 UTC m=+1184.840288971" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.673492 4705 scope.go:117] "RemoveContainer" containerID="fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.710730 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bkfjd" podStartSLOduration=2.836500226 podStartE2EDuration="10.710700823s" podCreationTimestamp="2026-02-16 15:13:00 +0000 UTC" firstStartedPulling="2026-02-16 15:13:01.642645904 +0000 UTC m=+1175.827622980" lastFinishedPulling="2026-02-16 15:13:09.516846501 +0000 UTC m=+1183.701823577" observedRunningTime="2026-02-16 15:13:10.688808598 +0000 UTC m=+1184.873785674" watchObservedRunningTime="2026-02-16 15:13:10.710700823 +0000 UTC m=+1184.895677899" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.729603 4705 scope.go:117] "RemoveContainer" containerID="aab3bf3fd9a6ac7b00f1d7f4d403634f6903e2d7b39a53d0805702ee717f2a00" Feb 16 15:13:10 crc 
kubenswrapper[4705]: I0216 15:13:10.740792 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"] Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.758601 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"] Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.769487 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"] Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.776659 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"] Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.120155 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lz7zl"] Feb 16 15:13:11 crc kubenswrapper[4705]: W0216 15:13:11.132772 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb68c2080_dd84_406b_ba19_b4cdd136c90e.slice/crio-590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b WatchSource:0}: Error finding container 590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b: Status 404 returned error can't find the container with id 590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.260463 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:13:11 crc kubenswrapper[4705]: W0216 15:13:11.274131 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod683ef288_8b6e_4612_be52_d1654bd75098.slice/crio-3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69 WatchSource:0}: Error finding container 3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69: Status 404 returned error can't find the container with id 
3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69 Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.668705 4705 generic.go:334] "Generic (PLEG): container finished" podID="b68c2080-dd84-406b-ba19-b4cdd136c90e" containerID="e75206ab14fb3712b094ac170d341a1c3364f06bb8b3dfb2b35e1aa8ca3e80f3" exitCode=0 Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.668819 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lz7zl" event={"ID":"b68c2080-dd84-406b-ba19-b4cdd136c90e","Type":"ContainerDied","Data":"e75206ab14fb3712b094ac170d341a1c3364f06bb8b3dfb2b35e1aa8ca3e80f3"} Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.669228 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lz7zl" event={"ID":"b68c2080-dd84-406b-ba19-b4cdd136c90e","Type":"ContainerStarted","Data":"590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b"} Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.675300 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"683ef288-8b6e-4612-be52-d1654bd75098","Type":"ContainerStarted","Data":"3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69"} Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.736495 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-crbv8" podUID="4374b7db-8c42-42e1-b2bd-c633bdd8edfd" containerName="ovn-controller" probeResult="failure" output=< Feb 16 15:13:11 crc kubenswrapper[4705]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 15:13:11 crc kubenswrapper[4705]: > Feb 16 15:13:12 crc kubenswrapper[4705]: I0216 15:13:12.440434 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" path="/var/lib/kubelet/pods/5ab25c9f-91f2-46f2-8abf-5004d8c114ad/volumes" Feb 16 15:13:12 crc kubenswrapper[4705]: I0216 
15:13:12.441741 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" path="/var/lib/kubelet/pods/cf9cafcc-24ed-4b80-9483-33f60d92f00f/volumes" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.057963 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.214030 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") pod \"b68c2080-dd84-406b-ba19-b4cdd136c90e\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.214419 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") pod \"b68c2080-dd84-406b-ba19-b4cdd136c90e\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.215018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b68c2080-dd84-406b-ba19-b4cdd136c90e" (UID: "b68c2080-dd84-406b-ba19-b4cdd136c90e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.215592 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.220839 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh" (OuterVolumeSpecName: "kube-api-access-fzzvh") pod "b68c2080-dd84-406b-ba19-b4cdd136c90e" (UID: "b68c2080-dd84-406b-ba19-b4cdd136c90e"). InnerVolumeSpecName "kube-api-access-fzzvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.318277 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.718818 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"} Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.720900 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lz7zl" event={"ID":"b68c2080-dd84-406b-ba19-b4cdd136c90e","Type":"ContainerDied","Data":"590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b"} Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.720942 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.720939 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.723941 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"683ef288-8b6e-4612-be52-d1654bd75098","Type":"ContainerStarted","Data":"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8"} Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.750231 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.367763689 podStartE2EDuration="1m5.750211881s" podCreationTimestamp="2026-02-16 15:12:09 +0000 UTC" firstStartedPulling="2026-02-16 15:12:34.409806483 +0000 UTC m=+1148.594783579" lastFinishedPulling="2026-02-16 15:13:13.792254695 +0000 UTC m=+1187.977231771" observedRunningTime="2026-02-16 15:13:14.750107588 +0000 UTC m=+1188.935084664" watchObservedRunningTime="2026-02-16 15:13:14.750211881 +0000 UTC m=+1188.935188947" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.782930 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.277171096 podStartE2EDuration="5.782894011s" podCreationTimestamp="2026-02-16 15:13:09 +0000 UTC" firstStartedPulling="2026-02-16 15:13:11.279562354 +0000 UTC m=+1185.464539430" lastFinishedPulling="2026-02-16 15:13:13.785285269 +0000 UTC m=+1187.970262345" observedRunningTime="2026-02-16 15:13:14.765881052 +0000 UTC m=+1188.950858128" watchObservedRunningTime="2026-02-16 15:13:14.782894011 +0000 UTC m=+1188.967871087" Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.737316 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerID="86e9ac4153a2ccf0f2f0a689cbb68d98c66cd9f62606340a11ddf8bd0f8e2f02" exitCode=0 Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.737449 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerDied","Data":"86e9ac4153a2ccf0f2f0a689cbb68d98c66cd9f62606340a11ddf8bd0f8e2f02"} Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.744153 4705 generic.go:334] "Generic (PLEG): container finished" podID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerID="3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523" exitCode=0 Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.744252 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerDied","Data":"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523"} Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.880156 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.958796 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:15 crc kubenswrapper[4705]: E0216 15:13:15.959295 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:13:15 crc kubenswrapper[4705]: E0216 15:13:15.959390 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:13:15 crc kubenswrapper[4705]: E0216 15:13:15.959497 4705 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:31.959481328 +0000 UTC m=+1206.144458404 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.733121 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-crbv8" podUID="4374b7db-8c42-42e1-b2bd-c633bdd8edfd" containerName="ovn-controller" probeResult="failure" output=< Feb 16 15:13:16 crc kubenswrapper[4705]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 15:13:16 crc kubenswrapper[4705]: > Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.769402 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerStarted","Data":"ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30"} Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.770170 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.774174 4705 generic.go:334] "Generic (PLEG): container finished" podID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerID="663ebd3ccb0d52cf06babb260d76ccd359a0593b49138f63e6178bfe5bfd914d" exitCode=0 Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.774222 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerDied","Data":"663ebd3ccb0d52cf06babb260d76ccd359a0593b49138f63e6178bfe5bfd914d"} Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.778229 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerStarted","Data":"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641"} Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.779150 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.802735 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371961.05206 podStartE2EDuration="1m15.802716808s" podCreationTimestamp="2026-02-16 15:12:01 +0000 UTC" firstStartedPulling="2026-02-16 15:12:04.194567498 +0000 UTC m=+1118.379544574" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.796797701 +0000 UTC m=+1190.981774777" watchObservedRunningTime="2026-02-16 15:13:16.802716808 +0000 UTC m=+1190.987693884" Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.840449 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371961.01436 podStartE2EDuration="1m15.840415598s" podCreationTimestamp="2026-02-16 15:12:01 +0000 UTC" firstStartedPulling="2026-02-16 15:12:05.095426679 +0000 UTC m=+1119.280403755" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.822767702 +0000 UTC m=+1191.007744778" watchObservedRunningTime="2026-02-16 15:13:16.840415598 +0000 UTC m=+1191.025392674" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.008991 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 
15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.014280 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.283061 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"] Feb 16 15:13:17 crc kubenswrapper[4705]: E0216 15:13:17.283730 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.283760 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" Feb 16 15:13:17 crc kubenswrapper[4705]: E0216 15:13:17.283787 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68c2080-dd84-406b-ba19-b4cdd136c90e" containerName="mariadb-account-create-update" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.283796 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68c2080-dd84-406b-ba19-b4cdd136c90e" containerName="mariadb-account-create-update" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.284056 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.284087 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b68c2080-dd84-406b-ba19-b4cdd136c90e" containerName="mariadb-account-create-update" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.285020 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.288866 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.318592 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"] Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.411210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.411265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.411755 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.411808 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: 
\"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.412032 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.412285 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.514844 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.514901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.514951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") pod 
\"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515044 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515141 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515767 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515886 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: 
\"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.516509 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.517961 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.518304 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.541799 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.605148 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:18 crc kubenswrapper[4705]: I0216 15:13:18.800150 4705 generic.go:334] "Generic (PLEG): container finished" podID="f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" containerID="30b1733a19ec2f0e771116151f10812bfa16ad5725d0557df1dec597eb7f8718" exitCode=0 Feb 16 15:13:18 crc kubenswrapper[4705]: I0216 15:13:18.800253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bkfjd" event={"ID":"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d","Type":"ContainerDied","Data":"30b1733a19ec2f0e771116151f10812bfa16ad5725d0557df1dec597eb7f8718"} Feb 16 15:13:21 crc kubenswrapper[4705]: I0216 15:13:21.723828 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-crbv8" podUID="4374b7db-8c42-42e1-b2bd-c633bdd8edfd" containerName="ovn-controller" probeResult="failure" output=< Feb 16 15:13:21 crc kubenswrapper[4705]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 15:13:21 crc kubenswrapper[4705]: > Feb 16 15:13:23 crc kubenswrapper[4705]: I0216 15:13:23.703594 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.790489 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.875857 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bkfjd" event={"ID":"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d","Type":"ContainerDied","Data":"dfe132517673f75467ba9259f9327d854f3707668943a8696e2a0f96d6cf192b"} Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.876353 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfe132517673f75467ba9259f9327d854f3707668943a8696e2a0f96d6cf192b" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.876459 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.881548 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerStarted","Data":"9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f"} Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.882869 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.920631 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371953.934166 podStartE2EDuration="1m22.920610089s" podCreationTimestamp="2026-02-16 15:12:02 +0000 UTC" firstStartedPulling="2026-02-16 15:12:04.895820834 +0000 UTC m=+1119.080797910" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:24.91070224 +0000 UTC m=+1199.095679316" watchObservedRunningTime="2026-02-16 15:13:24.920610089 +0000 UTC m=+1199.105587165" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.957941 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958070 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958122 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958142 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958185 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958212 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc 
kubenswrapper[4705]: I0216 15:13:24.958260 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.959112 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.959147 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.964449 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx" (OuterVolumeSpecName: "kube-api-access-cwncx") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "kube-api-access-cwncx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.969584 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.980823 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"] Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.005264 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts" (OuterVolumeSpecName: "scripts") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.005387 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.010534 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061452 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061501 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061516 4705 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061529 4705 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061545 4705 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061555 4705 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061571 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.880991 4705 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.887053 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.894317 4705 generic.go:334] "Generic (PLEG): container finished" podID="cecdccc6-64fe-465b-a99e-bd27376c7e32" containerID="6b13db9b9dc4dcec392ffa4e74f00a9ee43871effc42f68cb3ed77e75924c36e" exitCode=0 Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.894445 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-z9d5l" event={"ID":"cecdccc6-64fe-465b-a99e-bd27376c7e32","Type":"ContainerDied","Data":"6b13db9b9dc4dcec392ffa4e74f00a9ee43871effc42f68cb3ed77e75924c36e"} Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.894506 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-z9d5l" event={"ID":"cecdccc6-64fe-465b-a99e-bd27376c7e32","Type":"ContainerStarted","Data":"2dec16247089b7227649b2dae3d9bd5708efe76e9e7d81b6ef14b7beed9b007a"} Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.896675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2kkpm" event={"ID":"1eba064a-3f7c-4395-beca-1b77b85e1a29","Type":"ContainerStarted","Data":"2f3be024158b93066d5262e9224908fddecc1a451092d024f7b8f2601466a9b4"} Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.937049 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2kkpm" podStartSLOduration=4.931009329 podStartE2EDuration="18.937029399s" podCreationTimestamp="2026-02-16 15:13:07 +0000 UTC" firstStartedPulling="2026-02-16 15:13:10.55023855 +0000 UTC m=+1184.735215626" lastFinishedPulling="2026-02-16 15:13:24.55625862 +0000 UTC m=+1198.741235696" observedRunningTime="2026-02-16 15:13:25.930125695 +0000 UTC 
m=+1200.115102771" watchObservedRunningTime="2026-02-16 15:13:25.937029399 +0000 UTC m=+1200.122006475" Feb 16 15:13:26 crc kubenswrapper[4705]: I0216 15:13:26.720745 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-crbv8" Feb 16 15:13:26 crc kubenswrapper[4705]: I0216 15:13:26.909988 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.476601 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.533880 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.533967 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run" (OuterVolumeSpecName: "var-run") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534047 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534146 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534201 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534244 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534343 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") "
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534390 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534536 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") "
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.535281 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.535976 4705 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.536004 4705 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.536016 4705 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.536029 4705 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.536273 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts" (OuterVolumeSpecName: "scripts") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.566390 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q" (OuterVolumeSpecName: "kube-api-access-n7w4q") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "kube-api-access-n7w4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.638309 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.638855 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.917483 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8-config-z9d5l"
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.917480 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-z9d5l" event={"ID":"cecdccc6-64fe-465b-a99e-bd27376c7e32","Type":"ContainerDied","Data":"2dec16247089b7227649b2dae3d9bd5708efe76e9e7d81b6ef14b7beed9b007a"}
Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.917557 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dec16247089b7227649b2dae3d9bd5708efe76e9e7d81b6ef14b7beed9b007a"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.587859 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"]
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.597387 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"]
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.721424 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"]
Feb 16 15:13:28 crc kubenswrapper[4705]: E0216 15:13:28.722053 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cecdccc6-64fe-465b-a99e-bd27376c7e32" containerName="ovn-config"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.722074 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cecdccc6-64fe-465b-a99e-bd27376c7e32" containerName="ovn-config"
Feb 16 15:13:28 crc kubenswrapper[4705]: E0216 15:13:28.722093 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" containerName="swift-ring-rebalance"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.722102 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" containerName="swift-ring-rebalance"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.722347 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" containerName="swift-ring-rebalance"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.722398 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cecdccc6-64fe-465b-a99e-bd27376c7e32" containerName="ovn-config"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.723235 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.725708 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.745406 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"]
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.768571 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769051 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769123 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769214 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769512 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769553 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.871710 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.872556 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.872620 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.872776 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873062 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873114 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873186 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873535 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.875066 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.894629 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.041541 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.554721 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"]
Feb 16 15:13:29 crc kubenswrapper[4705]: W0216 15:13:29.560945 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod397a0852_4076_4e11_bf86_af0ec6b81028.slice/crio-8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c WatchSource:0}: Error finding container 8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c: Status 404 returned error can't find the container with id 8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.795868 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.796765 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="thanos-sidecar" containerID="cri-o://3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76" gracePeriod=600
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.796841 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="config-reloader" containerID="cri-o://a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e" gracePeriod=600
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.796991 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="prometheus" containerID="cri-o://a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8" gracePeriod=600
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.947517 4705 generic.go:334] "Generic (PLEG): container finished" podID="761a74d6-061c-47dd-b376-b6d6a1906382" containerID="3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76" exitCode=0
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.947559 4705 generic.go:334] "Generic (PLEG): container finished" podID="761a74d6-061c-47dd-b376-b6d6a1906382" containerID="a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8" exitCode=0
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.947593 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"}
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.947683 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8"}
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.951177 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-fwr69" event={"ID":"397a0852-4076-4e11-bf86-af0ec6b81028","Type":"ContainerStarted","Data":"eda342c5c8c6a51871935a7c42d9108a69f95180c1db4ddf74979e0a43434713"}
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.951231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-fwr69" event={"ID":"397a0852-4076-4e11-bf86-af0ec6b81028","Type":"ContainerStarted","Data":"8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c"}
Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.996108 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-crbv8-config-fwr69" podStartSLOduration=1.996084958 podStartE2EDuration="1.996084958s" podCreationTimestamp="2026-02-16 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:29.983042041 +0000 UTC m=+1204.168019107" watchObservedRunningTime="2026-02-16 15:13:29.996084958 +0000 UTC m=+1204.181062034"
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.431928 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cecdccc6-64fe-465b-a99e-bd27376c7e32" path="/var/lib/kubelet/pods/cecdccc6-64fe-465b-a99e-bd27376c7e32/volumes"
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.842258 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.925468 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.925917 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926033 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926183 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926487 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926540 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87msx\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926619 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926661 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926703 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") "
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926887 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926942 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.927273 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.927791 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.927821 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.927840 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.934276 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.936872 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx" (OuterVolumeSpecName: "kube-api-access-87msx") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "kube-api-access-87msx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.943114 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.943168 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config" (OuterVolumeSpecName: "config") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.949672 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out" (OuterVolumeSpecName: "config-out") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.973493 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config" (OuterVolumeSpecName: "web-config") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.974106 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "pvc-d7cf3552-166c-4b95-888b-d04078abb8ed". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979111 4705 generic.go:334] "Generic (PLEG): container finished" podID="761a74d6-061c-47dd-b376-b6d6a1906382" containerID="a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e" exitCode=0
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979212 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979288 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"}
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979350 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf"}
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979389 4705 scope.go:117] "RemoveContainer" containerID="3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.982817 4705 generic.go:334] "Generic (PLEG): container finished" podID="397a0852-4076-4e11-bf86-af0ec6b81028" containerID="eda342c5c8c6a51871935a7c42d9108a69f95180c1db4ddf74979e0a43434713" exitCode=0
Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.982879 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-fwr69" event={"ID":"397a0852-4076-4e11-bf86-af0ec6b81028","Type":"ContainerDied","Data":"eda342c5c8c6a51871935a7c42d9108a69f95180c1db4ddf74979e0a43434713"}
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.032663 4705 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033143 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87msx\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033166 4705 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033183 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033201 4705 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033217 4705 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033275 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") on node \"crc\" "
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.040770 4705 scope.go:117] "RemoveContainer" containerID="a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.128846 4705 scope.go:117] "RemoveContainer" containerID="a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.134084 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.134267 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d7cf3552-166c-4b95-888b-d04078abb8ed" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed") on node "crc"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.134314 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.139677 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.166423 4705 scope.go:117] "RemoveContainer" containerID="f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.167659 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.184515 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.185530 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.185580 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.185613 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="prometheus"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.185651 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="prometheus"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.185694 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="init-config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.185728 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="init-config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.185752 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="thanos-sidecar"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.185760 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="thanos-sidecar"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.186174 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="prometheus"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.186219 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.186238 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="thanos-sidecar"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.193563 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.196013 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bs5tf"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.196577 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.197189 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.197311 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.197431 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.199897 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.200077 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.200520 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.204491 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.206637 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.230949 4705 scope.go:117] "RemoveContainer" containerID="3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.231714 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76\": container with ID starting with 3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76 not found: ID does not exist" containerID="3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.231756 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"} err="failed to get container status \"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76\": rpc error: code = NotFound desc = could not find container \"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76\": container with ID starting with 3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76 not found: ID does not exist"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.231788 4705 scope.go:117] "RemoveContainer" containerID="a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.233580 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e\": container with ID starting with a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e not found: ID does not exist" containerID="a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.233607 4705 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"} err="failed to get container status \"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e\": rpc error: code = NotFound desc = could not find container \"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e\": container with ID starting with a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e not found: ID does not exist" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.233622 4705 scope.go:117] "RemoveContainer" containerID="a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8" Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.234709 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8\": container with ID starting with a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8 not found: ID does not exist" containerID="a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.234735 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8"} err="failed to get container status \"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8\": rpc error: code = NotFound desc = could not find container \"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8\": container with ID starting with a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8 not found: ID does not exist" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.234750 4705 scope.go:117] "RemoveContainer" containerID="f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902" Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.236646 4705 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902\": container with ID starting with f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902 not found: ID does not exist" containerID="f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.236670 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"} err="failed to get container status \"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902\": rpc error: code = NotFound desc = could not find container \"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902\": container with ID starting with f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902 not found: ID does not exist" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345031 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345164 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9q74\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-kube-api-access-k9q74\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345226 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345400 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345463 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345491 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345532 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 
15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345571 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345638 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345685 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345711 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345729 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345749 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.448013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.449661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.449824 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc 
kubenswrapper[4705]: I0216 15:13:31.449959 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450079 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450330 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450579 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9q74\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-kube-api-access-k9q74\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450705 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450942 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451114 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451383 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451432 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451609 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451766 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.452042 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.453624 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.453658 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/88c6cd7cb604a645ab31c0e76d113b8c44ff69d3e39fcb5b354218108db12562/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.459697 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.461491 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.461663 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.461974 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.462126 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.463004 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.464548 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.469235 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.477603 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k9q74\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-kube-api-access-k9q74\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.506333 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.563174 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.002321 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.032562 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.167078 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.362934 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:13:32 crc kubenswrapper[4705]: W0216 15:13:32.375911 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ed43376_64ee_4fa7_9e24_00d85997e8c1.slice/crio-8d3b500d206aa80ad662bf2d4ab0b4910c0c6fcc99b2cb002f6a6f07244456b5 WatchSource:0}: Error finding container 8d3b500d206aa80ad662bf2d4ab0b4910c0c6fcc99b2cb002f6a6f07244456b5: Status 404 returned error can't find the container with id 8d3b500d206aa80ad662bf2d4ab0b4910c0c6fcc99b2cb002f6a6f07244456b5 Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.438620 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" path="/var/lib/kubelet/pods/761a74d6-061c-47dd-b376-b6d6a1906382/volumes" Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.454684 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519454 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519519 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519561 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519648 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519964 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519983 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.520049 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run" (OuterVolumeSpecName: "var-run") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.520266 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521013 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521347 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts" (OuterVolumeSpecName: "scripts") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521579 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521601 4705 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521613 4705 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521623 4705 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521635 4705 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.525997 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq" (OuterVolumeSpecName: "kube-api-access-hlfmq") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "kube-api-access-hlfmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.623431 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.651989 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"]
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.661930 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"]
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.841226 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 15:13:32 crc kubenswrapper[4705]: W0216 15:13:32.842353 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1c8c609_3b8c_48d1_9731_56451bf10919.slice/crio-b22553fa991b06503aeed9484cee162d4916a2332c3b1181f88049c64f43457b WatchSource:0}: Error finding container b22553fa991b06503aeed9484cee162d4916a2332c3b1181f88049c64f43457b: Status 404 returned error can't find the container with id b22553fa991b06503aeed9484cee162d4916a2332c3b1181f88049c64f43457b
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.846133 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.030481 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"b22553fa991b06503aeed9484cee162d4916a2332c3b1181f88049c64f43457b"}
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.033410 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.033437 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c"
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.035936 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"8d3b500d206aa80ad662bf2d4ab0b4910c0c6fcc99b2cb002f6a6f07244456b5"}
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.039321 4705 generic.go:334] "Generic (PLEG): container finished" podID="1eba064a-3f7c-4395-beca-1b77b85e1a29" containerID="2f3be024158b93066d5262e9224908fddecc1a451092d024f7b8f2601466a9b4" exitCode=0
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.039389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2kkpm" event={"ID":"1eba064a-3f7c-4395-beca-1b77b85e1a29","Type":"ContainerDied","Data":"2f3be024158b93066d5262e9224908fddecc1a451092d024f7b8f2601466a9b4"}
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.340561 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.735400 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.162055 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-mdv7p"]
Feb 16 15:13:34 crc kubenswrapper[4705]: E0216 15:13:34.162834 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397a0852-4076-4e11-bf86-af0ec6b81028" containerName="ovn-config"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.162973 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="397a0852-4076-4e11-bf86-af0ec6b81028" containerName="ovn-config"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.163261 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="397a0852-4076-4e11-bf86-af0ec6b81028" containerName="ovn-config"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.164175 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.176500 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-mdv7p"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.276234 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.276401 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.378790 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.378923 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.379351 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.380320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.381686 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-56f8-account-create-update-kbzxq"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.387434 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.400267 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.409129 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.459512 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397a0852-4076-4e11-bf86-af0ec6b81028" path="/var/lib/kubelet/pods/397a0852-4076-4e11-bf86-af0ec6b81028/volumes"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.476028 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.481185 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwlqz\" (UniqueName: \"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.481282 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.504497 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fpgrj"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.506382 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpgrj"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.531507 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fpgrj"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.550807 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.554614 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ea32-account-create-update-7qwh2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.559132 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.562566 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.575643 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tr9gx"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.577222 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tr9gx"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.588339 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwlqz\" (UniqueName: \"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.588509 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.588568 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.588597 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.589900 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.591102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.591180 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.599960 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tr9gx"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.609312 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gmlkp"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.610808 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.620833 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g4ghk"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.621176 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.621440 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.621607 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.637225 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gmlkp"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.643755 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwlqz\" (UniqueName: \"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.673082 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.676897 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.680497 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.688474 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lqlft"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.692185 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.693944 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694016 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694055 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694167 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694237 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694261 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694287 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.695075 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.706211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.715944 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.723923 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.724714 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqlft"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.737215 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.810980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.813343 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.813564 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.813656 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.813827 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.814070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.814223 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.814350 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.822889 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.826673 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-56f8-account-create-update-kbzxq"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.829329 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.855151 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpgrj"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.877915 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ea32-account-create-update-7qwh2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.879541 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.886298 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.896732 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.898881 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.910951 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.914002 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tr9gx"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919259 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919294 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919322 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919390 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.922088 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.934088 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.964486 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.965682 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.013360 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.027543 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.027657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.027717 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.027903 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.028931 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.029996 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.057504 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.062845 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.070934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"669cacaec90ca1a7f976320f2337fcae6fc3da525203f5c5f902617c048d5a8c"}
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.070978 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"6b3ca43012f6a386bbd086c8a82f4fc946ab57411e76aea8b5dd567353cc5cb3"}
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.269083 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.352816 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.425120 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-mdv7p"]
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.536195 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"]
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.622673 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fpgrj"]
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.928897 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"]
Feb 16 15:13:36 crc kubenswrapper[4705]: W0216 15:13:36.037927 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod104ec45d_e95d_40c0_80a8_d59de9e2d45a.slice/crio-8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520 WatchSource:0}: Error finding container 8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520: Status 404 returned error can't find the container with id 8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.092454 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mdv7p" event={"ID":"ae5e7e5c-9868-457d-872b-ec1d3f34449a","Type":"ContainerStarted","Data":"32ab91c93f68da31201392a10d98f88caba3199bca15a0a94cd56707aab40d9b"}
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.098832 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ea32-account-create-update-7qwh2" event={"ID":"104ec45d-e95d-40c0-80a8-d59de9e2d45a","Type":"ContainerStarted","Data":"8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520"}
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.110438 4705
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fpgrj" event={"ID":"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f","Type":"ContainerStarted","Data":"942bfa4e17fe5d47469dc8682fa613e208400c069cce56e2e413cb6010902c4b"} Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.112780 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-56f8-account-create-update-kbzxq" event={"ID":"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d","Type":"ContainerStarted","Data":"80379d8ba240dae993e748f01c0e5d89bb908dbbbcc06e414d9ec1d6cf418431"} Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.115215 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2kkpm" event={"ID":"1eba064a-3f7c-4395-beca-1b77b85e1a29","Type":"ContainerDied","Data":"8e5b1c2dc379b87aa6c47ebc3d629748ed51bf65dca39e43eb06d0a9ecab4706"} Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.115244 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5b1c2dc379b87aa6c47ebc3d629748ed51bf65dca39e43eb06d0a9ecab4706" Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.153581 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gmlkp"] Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.169560 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tr9gx"] Feb 16 15:13:36 crc kubenswrapper[4705]: W0216 15:13:36.231632 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00962490_7e63_4ba2_95e5_d95167d392bd.slice/crio-2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402 WatchSource:0}: Error finding container 2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402: Status 404 returned error can't find the container with id 2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402 Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 
15:13:36.235902 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqlft"] Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.246828 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:36 crc kubenswrapper[4705]: W0216 15:13:36.256313 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0216c47c_a1cb_48d7_a1cd_96bc1e7726b5.slice/crio-8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692 WatchSource:0}: Error finding container 8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692: Status 404 returned error can't find the container with id 8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692 Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.304873 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"] Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.324071 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"] Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.392128 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") pod \"1eba064a-3f7c-4395-beca-1b77b85e1a29\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.392247 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") pod \"1eba064a-3f7c-4395-beca-1b77b85e1a29\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.392498 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") pod \"1eba064a-3f7c-4395-beca-1b77b85e1a29\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.392554 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") pod \"1eba064a-3f7c-4395-beca-1b77b85e1a29\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.400001 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1eba064a-3f7c-4395-beca-1b77b85e1a29" (UID: "1eba064a-3f7c-4395-beca-1b77b85e1a29"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.403660 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs" (OuterVolumeSpecName: "kube-api-access-tchhs") pod "1eba064a-3f7c-4395-beca-1b77b85e1a29" (UID: "1eba064a-3f7c-4395-beca-1b77b85e1a29"). InnerVolumeSpecName "kube-api-access-tchhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.444871 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eba064a-3f7c-4395-beca-1b77b85e1a29" (UID: "1eba064a-3f7c-4395-beca-1b77b85e1a29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.499111 4705 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.499174 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.500101 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.552363 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data" (OuterVolumeSpecName: "config-data") pod "1eba064a-3f7c-4395-beca-1b77b85e1a29" (UID: "1eba064a-3f7c-4395-beca-1b77b85e1a29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.606800 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:36 crc kubenswrapper[4705]: E0216 15:13:36.962083 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00962490_7e63_4ba2_95e5_d95167d392bd.slice/crio-ca5ac92a7dc65970aa1597da51d8d235081d2d56a401566acfbc85af5a226fbd.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.135356 4705 generic.go:334] "Generic (PLEG): container finished" podID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" containerID="018bf846d7fe64a859e3c5304849a02f3a4179f776cea2e8ccc7acda8fa71421" exitCode=0 Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.135581 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mdv7p" event={"ID":"ae5e7e5c-9868-457d-872b-ec1d3f34449a","Type":"ContainerDied","Data":"018bf846d7fe64a859e3c5304849a02f3a4179f776cea2e8ccc7acda8fa71421"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.157749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bfb-account-create-update-r5cz9" event={"ID":"f5b60553-5a29-4222-ad99-2f33cedd3879","Type":"ContainerStarted","Data":"264622adf5af6886a931115cc69de7300b2b26acd7842f92edb4bffbce142d23"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.157804 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bfb-account-create-update-r5cz9" event={"ID":"f5b60553-5a29-4222-ad99-2f33cedd3879","Type":"ContainerStarted","Data":"86656d0cc5980e421a8e5acaa1ca2be74b7f4f8ab421aabeda25aec38dfdd925"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 
15:13:37.161206 4705 generic.go:334] "Generic (PLEG): container finished" podID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" containerID="15fa487fc78680eebbada617a958beee0dc93fabf1acb0258ad86c6a6637b4a3" exitCode=0 Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.161254 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-56f8-account-create-update-kbzxq" event={"ID":"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d","Type":"ContainerDied","Data":"15fa487fc78680eebbada617a958beee0dc93fabf1acb0258ad86c6a6637b4a3"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.164426 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"9825a109862b75e7878443427c37f65436e211e0d9a768210514e2164858b049"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.168984 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gmlkp" event={"ID":"d65b4384-a678-4002-9583-7f89082af14a","Type":"ContainerStarted","Data":"80c98d65087b5806a9de73aa66d3c3e78664c260bb21df0b7b979c3c0df92558"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.189553 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-3bfb-account-create-update-r5cz9" podStartSLOduration=3.189526705 podStartE2EDuration="3.189526705s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:37.175248263 +0000 UTC m=+1211.360225339" watchObservedRunningTime="2026-02-16 15:13:37.189526705 +0000 UTC m=+1211.374503781" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.194617 4705 generic.go:334] "Generic (PLEG): container finished" podID="00962490-7e63-4ba2-95e5-d95167d392bd" containerID="ca5ac92a7dc65970aa1597da51d8d235081d2d56a401566acfbc85af5a226fbd" exitCode=0 Feb 16 
15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.194710 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tr9gx" event={"ID":"00962490-7e63-4ba2-95e5-d95167d392bd","Type":"ContainerDied","Data":"ca5ac92a7dc65970aa1597da51d8d235081d2d56a401566acfbc85af5a226fbd"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.194759 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tr9gx" event={"ID":"00962490-7e63-4ba2-95e5-d95167d392bd","Type":"ContainerStarted","Data":"2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.197697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqlft" event={"ID":"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5","Type":"ContainerStarted","Data":"9d7693ed517cfe584b58f1eb27ff9e018459aad540cb357f988a64c00e64f25e"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.197725 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqlft" event={"ID":"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5","Type":"ContainerStarted","Data":"8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.212950 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"b77cf8fdb4c8919cbaa8f245ebefdf2f966303f558bcfa5fe069e5521b1f4e51"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.213000 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"27b52bacba22afeb30b60230c4c94ce40477695471eb40296b023c30ef071902"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.226405 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-ea32-account-create-update-7qwh2" event={"ID":"104ec45d-e95d-40c0-80a8-d59de9e2d45a","Type":"ContainerStarted","Data":"be8b3e0326ea71bbc9f9e87ea816230ad05f7c364ba58e44e8812ca01437d1c1"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.231769 4705 generic.go:334] "Generic (PLEG): container finished" podID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" containerID="3441f97b82c61443005d5c636ffa1b9046d09392c2db4e6c04fcbda2de0e8e36" exitCode=0 Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.231845 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fpgrj" event={"ID":"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f","Type":"ContainerDied","Data":"3441f97b82c61443005d5c636ffa1b9046d09392c2db4e6c04fcbda2de0e8e36"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.238399 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fb6f-account-create-update-sg7lm" event={"ID":"601c1c55-db3a-443a-bd6b-7d76e884697c","Type":"ContainerStarted","Data":"bdfd63c3ecc1595f3e167fa9202bd03a5c184ef38a3f05f7c5708bbb69702bbe"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.238446 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fb6f-account-create-update-sg7lm" event={"ID":"601c1c55-db3a-443a-bd6b-7d76e884697c","Type":"ContainerStarted","Data":"d5b33278f5b5080f081d8ed65f9d08614fde4d9fadd6cd96ae2ffb1908a8ce38"} Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.238473 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.307390 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lqlft" podStartSLOduration=3.307340069 podStartE2EDuration="3.307340069s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:37.267475217 +0000 UTC m=+1211.452452293" watchObservedRunningTime="2026-02-16 15:13:37.307340069 +0000 UTC m=+1211.492317145" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.370079 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ea32-account-create-update-7qwh2" podStartSLOduration=3.370055143 podStartE2EDuration="3.370055143s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:37.296066942 +0000 UTC m=+1211.481044038" watchObservedRunningTime="2026-02-16 15:13:37.370055143 +0000 UTC m=+1211.555032219" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.379006 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-fb6f-account-create-update-sg7lm" podStartSLOduration=3.378985114 podStartE2EDuration="3.378985114s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:37.318802551 +0000 UTC m=+1211.503779637" watchObservedRunningTime="2026-02-16 15:13:37.378985114 +0000 UTC m=+1211.563962190" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.634527 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"] Feb 16 15:13:37 crc kubenswrapper[4705]: E0216 
15:13:37.636720 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eba064a-3f7c-4395-beca-1b77b85e1a29" containerName="glance-db-sync" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.636892 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eba064a-3f7c-4395-beca-1b77b85e1a29" containerName="glance-db-sync" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.637390 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eba064a-3f7c-4395-beca-1b77b85e1a29" containerName="glance-db-sync" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.639151 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.696475 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"] Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.744757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.745974 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.746097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" 
(UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.746265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.746431 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849321 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849408 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849426 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: 
\"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849488 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849531 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.852702 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.852966 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.853248 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 
16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.853261 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.898006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.976233 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.252340 4705 generic.go:334] "Generic (PLEG): container finished" podID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" containerID="9d7693ed517cfe584b58f1eb27ff9e018459aad540cb357f988a64c00e64f25e" exitCode=0 Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.252774 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqlft" event={"ID":"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5","Type":"ContainerDied","Data":"9d7693ed517cfe584b58f1eb27ff9e018459aad540cb357f988a64c00e64f25e"} Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.254159 4705 generic.go:334] "Generic (PLEG): container finished" podID="f5b60553-5a29-4222-ad99-2f33cedd3879" containerID="264622adf5af6886a931115cc69de7300b2b26acd7842f92edb4bffbce142d23" exitCode=0 Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.254256 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bfb-account-create-update-r5cz9" 
event={"ID":"f5b60553-5a29-4222-ad99-2f33cedd3879","Type":"ContainerDied","Data":"264622adf5af6886a931115cc69de7300b2b26acd7842f92edb4bffbce142d23"} Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.256876 4705 generic.go:334] "Generic (PLEG): container finished" podID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" containerID="be8b3e0326ea71bbc9f9e87ea816230ad05f7c364ba58e44e8812ca01437d1c1" exitCode=0 Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.257088 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ea32-account-create-update-7qwh2" event={"ID":"104ec45d-e95d-40c0-80a8-d59de9e2d45a","Type":"ContainerDied","Data":"be8b3e0326ea71bbc9f9e87ea816230ad05f7c364ba58e44e8812ca01437d1c1"} Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.261280 4705 generic.go:334] "Generic (PLEG): container finished" podID="601c1c55-db3a-443a-bd6b-7d76e884697c" containerID="bdfd63c3ecc1595f3e167fa9202bd03a5c184ef38a3f05f7c5708bbb69702bbe" exitCode=0 Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.261453 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fb6f-account-create-update-sg7lm" event={"ID":"601c1c55-db3a-443a-bd6b-7d76e884697c","Type":"ContainerDied","Data":"bdfd63c3ecc1595f3e167fa9202bd03a5c184ef38a3f05f7c5708bbb69702bbe"} Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.512873 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"] Feb 16 15:13:38 crc kubenswrapper[4705]: W0216 15:13:38.516260 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1826cbb_e404_4385_8af6_36eab56118fb.slice/crio-ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221 WatchSource:0}: Error finding container ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221: Status 404 returned error can't find the container with id 
ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221 Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.288357 4705 generic.go:334] "Generic (PLEG): container finished" podID="e1826cbb-e404-4385-8af6-36eab56118fb" containerID="24e97e68f945ea90afb1476172863c94c103dc49fd76b27d1442100f2e0fdb3f" exitCode=0 Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.288487 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerDied","Data":"24e97e68f945ea90afb1476172863c94c103dc49fd76b27d1442100f2e0fdb3f"} Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.288915 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerStarted","Data":"ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221"} Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.586040 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.724654 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") pod \"00962490-7e63-4ba2-95e5-d95167d392bd\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.725311 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") pod \"00962490-7e63-4ba2-95e5-d95167d392bd\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.726191 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00962490-7e63-4ba2-95e5-d95167d392bd" (UID: "00962490-7e63-4ba2-95e5-d95167d392bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.726582 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.750475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp" (OuterVolumeSpecName: "kube-api-access-mc6sp") pod "00962490-7e63-4ba2-95e5-d95167d392bd" (UID: "00962490-7e63-4ba2-95e5-d95167d392bd"). InnerVolumeSpecName "kube-api-access-mc6sp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.751152 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.832952 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.897593 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mdv7p" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.911932 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.934161 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") pod \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.934412 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") pod \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.934983 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" (UID: "6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.936625 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.958503 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqlft" Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.974062 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn" (OuterVolumeSpecName: "kube-api-access-2bsgn") pod "6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" (UID: "6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f"). InnerVolumeSpecName "kube-api-access-2bsgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.039615 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") pod \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.040076 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") pod \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.040348 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwlqz\" (UniqueName: 
\"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") pod \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.040673 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") pod \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.040924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") pod \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.041061 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" (UID: "cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.041272 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") pod \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.041409 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" (UID: "0216c47c-a1cb-48d7-a1cd-96bc1e7726b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.042535 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae5e7e5c-9868-457d-872b-ec1d3f34449a" (UID: "ae5e7e5c-9868-457d-872b-ec1d3f34449a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.043846 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.044022 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.044483 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.045002 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.052657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh" (OuterVolumeSpecName: "kube-api-access-5wbjh") pod "ae5e7e5c-9868-457d-872b-ec1d3f34449a" (UID: "ae5e7e5c-9868-457d-872b-ec1d3f34449a"). InnerVolumeSpecName "kube-api-access-5wbjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.052844 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx" (OuterVolumeSpecName: "kube-api-access-xm6nx") pod "0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" (UID: "0216c47c-a1cb-48d7-a1cd-96bc1e7726b5"). 
InnerVolumeSpecName "kube-api-access-xm6nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.054505 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz" (OuterVolumeSpecName: "kube-api-access-rwlqz") pod "cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" (UID: "cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d"). InnerVolumeSpecName "kube-api-access-rwlqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.076347 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bfb-account-create-update-r5cz9" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.082999 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fb6f-account-create-update-sg7lm" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.098181 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.151469 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.151837 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.151851 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwlqz\" (UniqueName: \"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253188 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") pod \"601c1c55-db3a-443a-bd6b-7d76e884697c\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253281 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") pod \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") pod \"f5b60553-5a29-4222-ad99-2f33cedd3879\" (UID: 
\"f5b60553-5a29-4222-ad99-2f33cedd3879\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253462 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") pod \"f5b60553-5a29-4222-ad99-2f33cedd3879\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253523 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") pod \"601c1c55-db3a-443a-bd6b-7d76e884697c\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253595 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") pod \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.254130 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "601c1c55-db3a-443a-bd6b-7d76e884697c" (UID: "601c1c55-db3a-443a-bd6b-7d76e884697c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.254259 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "104ec45d-e95d-40c0-80a8-d59de9e2d45a" (UID: "104ec45d-e95d-40c0-80a8-d59de9e2d45a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.254414 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5b60553-5a29-4222-ad99-2f33cedd3879" (UID: "f5b60553-5a29-4222-ad99-2f33cedd3879"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.258967 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl" (OuterVolumeSpecName: "kube-api-access-f5spl") pod "104ec45d-e95d-40c0-80a8-d59de9e2d45a" (UID: "104ec45d-e95d-40c0-80a8-d59de9e2d45a"). InnerVolumeSpecName "kube-api-access-f5spl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.259400 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx" (OuterVolumeSpecName: "kube-api-access-phnmx") pod "f5b60553-5a29-4222-ad99-2f33cedd3879" (UID: "f5b60553-5a29-4222-ad99-2f33cedd3879"). InnerVolumeSpecName "kube-api-access-phnmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.261552 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd" (OuterVolumeSpecName: "kube-api-access-9npnd") pod "601c1c55-db3a-443a-bd6b-7d76e884697c" (UID: "601c1c55-db3a-443a-bd6b-7d76e884697c"). InnerVolumeSpecName "kube-api-access-9npnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.304262 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-56f8-account-create-update-kbzxq" event={"ID":"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d","Type":"ContainerDied","Data":"80379d8ba240dae993e748f01c0e5d89bb908dbbbcc06e414d9ec1d6cf418431"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.305795 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80379d8ba240dae993e748f01c0e5d89bb908dbbbcc06e414d9ec1d6cf418431" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.304667 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.309760 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tr9gx" event={"ID":"00962490-7e63-4ba2-95e5-d95167d392bd","Type":"ContainerDied","Data":"2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.309803 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.309844 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.313440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqlft" event={"ID":"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5","Type":"ContainerDied","Data":"8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.313496 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.313594 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqlft" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.315512 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mdv7p" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.315522 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mdv7p" event={"ID":"ae5e7e5c-9868-457d-872b-ec1d3f34449a","Type":"ContainerDied","Data":"32ab91c93f68da31201392a10d98f88caba3199bca15a0a94cd56707aab40d9b"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.315651 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ab91c93f68da31201392a10d98f88caba3199bca15a0a94cd56707aab40d9b" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.317993 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.318011 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ea32-account-create-update-7qwh2" event={"ID":"104ec45d-e95d-40c0-80a8-d59de9e2d45a","Type":"ContainerDied","Data":"8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.318075 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.319801 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fb6f-account-create-update-sg7lm" event={"ID":"601c1c55-db3a-443a-bd6b-7d76e884697c","Type":"ContainerDied","Data":"d5b33278f5b5080f081d8ed65f9d08614fde4d9fadd6cd96ae2ffb1908a8ce38"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.319839 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b33278f5b5080f081d8ed65f9d08614fde4d9fadd6cd96ae2ffb1908a8ce38" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.319898 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fb6f-account-create-update-sg7lm" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.323422 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"6eb12f019878e65aca4af6ec05215ffb4fdac243dce661df95c4668ac3f9270d"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.327362 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bfb-account-create-update-r5cz9" event={"ID":"f5b60553-5a29-4222-ad99-2f33cedd3879","Type":"ContainerDied","Data":"86656d0cc5980e421a8e5acaa1ca2be74b7f4f8ab421aabeda25aec38dfdd925"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.327436 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86656d0cc5980e421a8e5acaa1ca2be74b7f4f8ab421aabeda25aec38dfdd925" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.327533 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3bfb-account-create-update-r5cz9" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.341801 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerStarted","Data":"5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.342820 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.349400 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fpgrj" event={"ID":"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f","Type":"ContainerDied","Data":"942bfa4e17fe5d47469dc8682fa613e208400c069cce56e2e413cb6010902c4b"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.349436 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942bfa4e17fe5d47469dc8682fa613e208400c069cce56e2e413cb6010902c4b" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.349495 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365230 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365494 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365597 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365658 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365715 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365843 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.380291 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podStartSLOduration=3.380264758 podStartE2EDuration="3.380264758s" 
podCreationTimestamp="2026-02-16 15:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:40.362670373 +0000 UTC m=+1214.547647449" watchObservedRunningTime="2026-02-16 15:13:40.380264758 +0000 UTC m=+1214.565241834" Feb 16 15:13:41 crc kubenswrapper[4705]: I0216 15:13:41.371509 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"55c0bab29c8b98919f346e968526e39d942f2fe6ba8f4666849596b395ec332a"} Feb 16 15:13:43 crc kubenswrapper[4705]: I0216 15:13:43.397024 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ed43376-64ee-4fa7-9e24-00d85997e8c1" containerID="9825a109862b75e7878443427c37f65436e211e0d9a768210514e2164858b049" exitCode=0 Feb 16 15:13:43 crc kubenswrapper[4705]: I0216 15:13:43.397165 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerDied","Data":"9825a109862b75e7878443427c37f65436e211e0d9a768210514e2164858b049"} Feb 16 15:13:43 crc kubenswrapper[4705]: I0216 15:13:43.756791 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.418253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"95af8624eceea65afe9b4e1dc2ea480c5f5a5096093f129be79d6604f592e37b"} Feb 16 15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.443193 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gmlkp" event={"ID":"d65b4384-a678-4002-9583-7f89082af14a","Type":"ContainerStarted","Data":"01529216e6cfee37b45daa7e445d747074cda05873b794d38ec8cf37020c339e"} Feb 16 
15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.443289 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"b2930a5272042600ed06076da74ab456dabb5a50c8ec9bceea362fa528cf4465"} Feb 16 15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.443306 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"6b633566da895b567918e8c2ae82559ba9c75d5c76f55c60b3e5f75d8633e7d5"} Feb 16 15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.451942 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gmlkp" podStartSLOduration=2.905517526 podStartE2EDuration="10.451920891s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="2026-02-16 15:13:36.244238034 +0000 UTC m=+1210.429215110" lastFinishedPulling="2026-02-16 15:13:43.790641389 +0000 UTC m=+1217.975618475" observedRunningTime="2026-02-16 15:13:44.450059759 +0000 UTC m=+1218.635036835" watchObservedRunningTime="2026-02-16 15:13:44.451920891 +0000 UTC m=+1218.636897967" Feb 16 15:13:46 crc kubenswrapper[4705]: I0216 15:13:46.491685 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"654c94a7a0cff329fed2bdd639bfa7a86b985cf83a0ce2ddcdbdafb7bd78f5b5"} Feb 16 15:13:46 crc kubenswrapper[4705]: I0216 15:13:46.495455 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"66f278accad248d50db1bff0edb1e77309684037394874bf31267967c7e4a642"} Feb 16 15:13:46 crc kubenswrapper[4705]: I0216 15:13:46.495544 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"f616ae0b1b66a1d1c75d6e06fef5b771f580c6a8c7c6f7bae1c3ceecf3195e07"} Feb 16 15:13:47 crc kubenswrapper[4705]: I0216 15:13:47.537864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"dd338201f1295e86847580333ba7e4be8606e52f0c3784fd62e242f21730cb84"} Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:47.979690 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.056247 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"] Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.056972 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-zg96k" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="dnsmasq-dns" containerID="cri-o://f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987" gracePeriod=10 Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.576580 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"e86082918e26262b09bf98023c01f770a38c7b4039714ada4b9cbb4796204dd8"} Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.579246 4705 generic.go:334] "Generic (PLEG): container finished" podID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerID="f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987" exitCode=0 Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.579428 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" 
event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerDied","Data":"f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987"}
Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.582391 4705 generic.go:334] "Generic (PLEG): container finished" podID="d65b4384-a678-4002-9583-7f89082af14a" containerID="01529216e6cfee37b45daa7e445d747074cda05873b794d38ec8cf37020c339e" exitCode=0
Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.582466 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gmlkp" event={"ID":"d65b4384-a678-4002-9583-7f89082af14a","Type":"ContainerDied","Data":"01529216e6cfee37b45daa7e445d747074cda05873b794d38ec8cf37020c339e"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.013420 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.085835 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.086096 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.086159 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.086256 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.086306 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.091867 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj" (OuterVolumeSpecName: "kube-api-access-kqtrj") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "kube-api-access-kqtrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.146619 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config" (OuterVolumeSpecName: "config") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.147963 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.151021 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.165093 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189541 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189586 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189602 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189615 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189632 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.599794 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"5345d2d2965fa27c8b3c6897875843cd5e66e7db0b292dfc11d468f661399df9"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.601727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"9a603b01c759703e43b1501dd3ebd5a7147577da2597b51c3fe4e9bb144608a6"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.602393 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerDied","Data":"d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.602432 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.602463 4705 scope.go:117] "RemoveContainer" containerID="f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.611859 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"54f55d40a2139e9694d2d9eef26202b2ed81d8cd9dab629264ea8cf4c1c1274f"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.612239 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"c89633df1ef1ea656b5d1ea07655513c6c01edb2957d15b0346a24143ccb478a"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.645989 4705 scope.go:117] "RemoveContainer" containerID="707d5db016ee71c7be05915614101d9c579374a5ac210067cf65362c8d2b2120"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.667991 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.299302492 podStartE2EDuration="51.667964635s" podCreationTimestamp="2026-02-16 15:12:58 +0000 UTC" firstStartedPulling="2026-02-16 15:13:32.84586877 +0000 UTC m=+1207.030845846" lastFinishedPulling="2026-02-16 15:13:45.214530913 +0000 UTC m=+1219.399507989" observedRunningTime="2026-02-16 15:13:49.656975406 +0000 UTC m=+1223.841952482" watchObservedRunningTime="2026-02-16 15:13:49.667964635 +0000 UTC m=+1223.852941711"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.712735 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.712713534 podStartE2EDuration="18.712713534s" podCreationTimestamp="2026-02-16 15:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:49.701796977 +0000 UTC m=+1223.886774053" watchObservedRunningTime="2026-02-16 15:13:49.712713534 +0000 UTC m=+1223.897690610"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.739734 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"]
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.750441 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"]
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.987827 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988697 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b60553-5a29-4222-ad99-2f33cedd3879" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988719 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b60553-5a29-4222-ad99-2f33cedd3879" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988748 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00962490-7e63-4ba2-95e5-d95167d392bd" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988756 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="00962490-7e63-4ba2-95e5-d95167d392bd" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988768 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988774 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988792 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="init"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988798 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="init"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988843 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988850 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989209 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989222 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989241 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="dnsmasq-dns"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989248 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="dnsmasq-dns"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989267 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989275 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989292 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601c1c55-db3a-443a-bd6b-7d76e884697c" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989299 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="601c1c55-db3a-443a-bd6b-7d76e884697c" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989326 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989334 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989533 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="601c1c55-db3a-443a-bd6b-7d76e884697c" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989548 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="00962490-7e63-4ba2-95e5-d95167d392bd" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989560 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989574 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989588 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989599 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989607 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="dnsmasq-dns"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989615 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b60553-5a29-4222-ad99-2f33cedd3879" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989624 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.990796 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.001862 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.025758 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.030965 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031023 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031074 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031132 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133613 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133666 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133716 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133758 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133799 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133853 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.135590 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.135696 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.135696 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.136215 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.136574 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.147337 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.158561 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.240351 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") pod \"d65b4384-a678-4002-9583-7f89082af14a\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") "
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.245946 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") pod \"d65b4384-a678-4002-9583-7f89082af14a\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") "
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.246169 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") pod \"d65b4384-a678-4002-9583-7f89082af14a\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") "
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.250734 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv" (OuterVolumeSpecName: "kube-api-access-46kwv") pod "d65b4384-a678-4002-9583-7f89082af14a" (UID: "d65b4384-a678-4002-9583-7f89082af14a"). InnerVolumeSpecName "kube-api-access-46kwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.272855 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d65b4384-a678-4002-9583-7f89082af14a" (UID: "d65b4384-a678-4002-9583-7f89082af14a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.292258 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data" (OuterVolumeSpecName: "config-data") pod "d65b4384-a678-4002-9583-7f89082af14a" (UID: "d65b4384-a678-4002-9583-7f89082af14a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.327316 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.350444 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.350492 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.350505 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.474334 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" path="/var/lib/kubelet/pods/af8b1ad4-1803-403b-bc68-8c6ccb877b11/volumes"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.628064 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gmlkp" event={"ID":"d65b4384-a678-4002-9583-7f89082af14a","Type":"ContainerDied","Data":"80c98d65087b5806a9de73aa66d3c3e78664c260bb21df0b7b979c3c0df92558"}
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.628451 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c98d65087b5806a9de73aa66d3c3e78664c260bb21df0b7b979c3c0df92558"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.628126 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.882413 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.933562 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vj8fn"]
Feb 16 15:13:50 crc kubenswrapper[4705]: E0216 15:13:50.934113 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65b4384-a678-4002-9583-7f89082af14a" containerName="keystone-db-sync"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.934137 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65b4384-a678-4002-9583-7f89082af14a" containerName="keystone-db-sync"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.934350 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65b4384-a678-4002-9583-7f89082af14a" containerName="keystone-db-sync"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.937722 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.952871 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.953097 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g4ghk"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.953231 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.953385 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.953498 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.967207 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972261 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972347 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972539 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972611 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972682 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972800 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.992536 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vj8fn"]
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.078849 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.078927 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.079008 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.079065 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.079112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.079199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.087797 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.090393 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.090639 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.101541 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"]
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.104956 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.105318 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.106853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.143973 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185490 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185744 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185775 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185806 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185882 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.186298 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"]
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.227860 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-nz52p"]
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.229617 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.233406 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-7v2x2" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.233553 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.281917 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nz52p"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290038 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290104 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290237 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290306 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290327 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290344 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290388 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.291223 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.292090 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.292625 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.293157 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.293703 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" 
(UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.322731 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.374585 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-76rfw"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.376345 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.380222 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.380427 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7rvmg" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.380631 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394095 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394162 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") pod \"heat-db-sync-nz52p\" (UID: 
\"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394204 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwqp\" (UniqueName: \"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394237 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394333 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394394 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.396102 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-scncd"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.397754 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.403135 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.403528 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.404325 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.407280 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.408497 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6g79l" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.417900 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-scncd"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.435927 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.435966 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-76rfw"] Feb 16 15:13:51 crc 
kubenswrapper[4705]: I0216 15:13:51.441996 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vj8fn" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.466313 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.469135 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4vj9p"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.471145 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.484984 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4fhnl" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.485214 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.486866 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4vj9p"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.517786 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.517846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 
16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.517900 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.517999 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518105 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518123 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518174 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518317 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518354 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518398 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwqp\" (UniqueName: \"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518530 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.531916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.535638 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.546883 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.551732 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwqp\" (UniqueName: \"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.556627 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-f8fxj"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.563497 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.585807 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.590174 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.633184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.636487 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.636542 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646593 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646682 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646708 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646922 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646960 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647062 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647096 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647117 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647149 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.661787 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.665571 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.665638 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.682534 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.683420 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.705045 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.716147 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xbqk5" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.716485 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"placement-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.716631 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.725850 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.736732 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.743679 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.750624 4705 generic.go:334] "Generic (PLEG): container finished" podID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" containerID="05201b67128ac4f277cd627c7015b76f0d3e8ee95d995d10260beb03e997bc8d" exitCode=0 Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.757682 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771127 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") pod \"placement-db-sync-f8fxj\" (UID: 
\"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771224 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771394 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771717 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.775764 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.775846 
4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.786146 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.789854 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn" event={"ID":"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0","Type":"ContainerDied","Data":"05201b67128ac4f277cd627c7015b76f0d3e8ee95d995d10260beb03e997bc8d"} Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.790066 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn" event={"ID":"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0","Type":"ContainerStarted","Data":"030bdfe61a394a989ef6031694c0452fbb492551573dac47e20d613416b7d1f6"} Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.790220 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.792016 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f8fxj"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.802114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.807592 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.812919 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.818672 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.836187 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.847594 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.864221 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.867779 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.868478 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.869574 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.873297 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.873821 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875005 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875068 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875091 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875223 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875576 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875659 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmx9q\" (UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.978929 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979433 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qmx9q\" (UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979490 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979512 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979538 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979563 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979578 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979617 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979657 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979692 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979753 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") pod \"ceilometer-0\" 
(UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979775 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979848 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.980526 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.980646 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.981086 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc 
kubenswrapper[4705]: I0216 15:13:51.981855 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.024485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmx9q\" (UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082253 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082309 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082346 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082404 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082548 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082571 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.083027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.083261 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.087310 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.087958 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.088652 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.092776 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.110790 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.146006 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.148352 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.155657 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hkp6m" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.155916 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.157355 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.158878 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.176108 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.194048 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.228103 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.290207 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.292235 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.296318 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.298855 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.298918 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.298971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299032 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299057 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299098 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299146 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299191 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.307544 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.358675 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480191 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480242 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480270 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480307 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480329 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480355 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480474 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480491 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480534 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480552 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480600 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480644 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480733 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480762 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.481303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.494589 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.502410 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.542436 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.560695 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.562502 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.562553 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69f9a0afde09cde3194ac3fcfa9df7bd80860335646625dfa8f7f213d22f9d05/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.563104 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589010 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589064 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589085 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589217 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589260 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589414 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589967 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.591278 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.599618 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.605695 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.619272 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.660739 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vj8fn"]
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.666252 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.666720 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.666744 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5d44f58a274729942503542a04ea080ac58862a31aa07a9ece94d5eb6543b70/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.670622 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.671595 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.797728 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"]
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.805151 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.814322 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vj8fn" event={"ID":"404be51c-5189-4fe5-a795-3e4cf4146f9d","Type":"ContainerStarted","Data":"a22bf069c9870ef1f56ea0f515bff169c7320c6d24545673e51b254c497e6367"}
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.843958 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.881416 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.951390 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.983860 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.019246 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") "
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.020192 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") "
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.020230 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") "
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.020251 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") "
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.021713 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") "
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.021881 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") "
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.036632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w" (OuterVolumeSpecName: "kube-api-access-hhc8w") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "kube-api-access-hhc8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.111546 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.120020 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config" (OuterVolumeSpecName: "config") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.121812 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.125223 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.140099 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.145907 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") "
Feb 16 15:13:53 crc kubenswrapper[4705]: W0216 15:13:53.150718 4705 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0/volumes/kubernetes.io~configmap/ovsdbserver-nb
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.150758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151065 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151098 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151107 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151143 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151155 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151163 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.166134 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nz52p"]
Feb 16 15:13:53 crc kubenswrapper[4705]: W0216 15:13:53.178652 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72538f80_8a9f_451f_9653_4f1faeec593c.slice/crio-fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec WatchSource:0}: Error finding container fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec: Status 404 returned error can't find the container with id fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.192269 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4vj9p"]
Feb 16 15:13:53 crc kubenswrapper[4705]: W0216 15:13:53.210968 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod302aee2f_61be_439f_a04e_356243bb65b6.slice/crio-220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf WatchSource:0}: Error finding container 220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf: Status 404 returned error can't find the container with id 220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.356314 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-76rfw"]
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.372246 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-scncd"]
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.412287 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f8fxj"]
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.584407 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.593901 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"]
Feb 16 15:13:53 crc kubenswrapper[4705]: W0216 15:13:53.604404 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ee1b858_e5e9_4163_9fe6_e503be62c4f7.slice/crio-0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6 WatchSource:0}: Error finding container 0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6: Status 404 returned error can't find the container with id 0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.856559 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f8fxj" event={"ID":"e652b8a2-fe79-4cdc-b376-c4bc0b85197f","Type":"ContainerStarted","Data":"1be0d2c6579adbd3cc2685214fa08e5f78ef226638707188ec8a446ccb1b6a4c"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.875835 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz52p" event={"ID":"72538f80-8a9f-451f-9653-4f1faeec593c","Type":"ContainerStarted","Data":"fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.886112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vj8fn" event={"ID":"404be51c-5189-4fe5-a795-3e4cf4146f9d","Type":"ContainerStarted","Data":"f5c17e7d39b9ddbcba6b3a6b64fb5b75e17d9532faec51dee99c1ace5575000a"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.892200 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-76rfw" event={"ID":"baaef700-c962-494f-bee0-67990bf8bd84","Type":"ContainerStarted","Data":"5c65ee7316022a6067fee6060582c1e9c9148141d1bad10ffaade19ce9d7d503"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.916440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerStarted","Data":"0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.936011 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vj8fn" podStartSLOduration=3.935987281 podStartE2EDuration="3.935987281s" podCreationTimestamp="2026-02-16 15:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:53.91176312 +0000 UTC m=+1228.096740206" watchObservedRunningTime="2026-02-16 15:13:53.935987281 +0000 UTC m=+1228.120964347"
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.944969 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerStarted","Data":"1f91f91f4ee1690f46dee7379d3b5f6f9664f4c57d16ad81e7ef1f99a61e9417"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.947914 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.949394 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn" event={"ID":"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0","Type":"ContainerDied","Data":"030bdfe61a394a989ef6031694c0452fbb492551573dac47e20d613416b7d1f6"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.949470 4705 scope.go:117] "RemoveContainer" containerID="05201b67128ac4f277cd627c7015b76f0d3e8ee95d995d10260beb03e997bc8d"
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.971060 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-scncd" event={"ID":"ddb24908-6026-4fe7-81b6-345402c9398e","Type":"ContainerStarted","Data":"25c35aaf8f4af9631df07d9053074c5f0aa7a4b2f00e10128c4a4c8292d954ed"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.973979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4vj9p" event={"ID":"302aee2f-61be-439f-a04e-356243bb65b6","Type":"ContainerStarted","Data":"220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.974158 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.980829 4705 generic.go:334] "Generic (PLEG): container finished" podID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" containerID="4e0dae24c87f70b61e917d05752a749f7497d8718296ac6852500d572db0ac7e" exitCode=0
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.980889 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-spn7f" event={"ID":"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64","Type":"ContainerDied","Data":"4e0dae24c87f70b61e917d05752a749f7497d8718296ac6852500d572db0ac7e"}
Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.980922 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-spn7f" event={"ID":"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64","Type":"ContainerStarted","Data":"d10563f7b1298c4c3f2217ca2ab08353ecf9c50a08e788908c9aa92642c5aac7"}
Feb 16 15:13:54 crc kubenswrapper[4705]: W0216 15:13:54.116049 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89e0f96f_ae09_4238_9d36_1eafc315ed7e.slice/crio-8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e WatchSource:0}: Error finding container 8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e: Status 404 returned error can't find the container with id 8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.118505 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:54 crc kubenswrapper[4705]: W0216 15:13:54.138539 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode059220a_f230_42fe_b1bf_b19be7abd7e1.slice/crio-c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3 WatchSource:0}: Error finding container c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3: Status 404 returned error can't find the container with id c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.174795 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.199812 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.473891 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" path="/var/lib/kubelet/pods/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0/volumes"
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.698363 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.736655 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.843742 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851053 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851438 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851576 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851780 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851925 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.894030 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj" (OuterVolumeSpecName: "kube-api-access-twbsj") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "kube-api-access-twbsj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.904310 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.910701 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.916562 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.961620 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.961722 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config" (OuterVolumeSpecName: "config") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965909 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965937 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965949 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965959 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965969 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.977768 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.071665 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.075866 4705 generic.go:334] "Generic (PLEG): container finished" podID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerID="bf52e5c5230ef41f1d394cd0295363d275a7ee8f615d1548a9442be8c7b9d9d3" exitCode=0
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.075955 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerDied","Data":"bf52e5c5230ef41f1d394cd0295363d275a7ee8f615d1548a9442be8c7b9d9d3"}
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.083337 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerStarted","Data":"8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e"}
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.084970 4705 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openstack/ceilometer-0"] Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.124153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-76rfw" event={"ID":"baaef700-c962-494f-bee0-67990bf8bd84","Type":"ContainerStarted","Data":"2d2e1b5af863f030f5a82ceae3d64982596f76c2c83b8724fb79e532c3c6c337"} Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.127687 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerStarted","Data":"c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3"} Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.135301 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.136979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-spn7f" event={"ID":"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64","Type":"ContainerDied","Data":"d10563f7b1298c4c3f2217ca2ab08353ecf9c50a08e788908c9aa92642c5aac7"} Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.137039 4705 scope.go:117] "RemoveContainer" containerID="4e0dae24c87f70b61e917d05752a749f7497d8718296ac6852500d572db0ac7e" Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.159057 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-76rfw" podStartSLOduration=4.159033495 podStartE2EDuration="4.159033495s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:55.145088743 +0000 UTC m=+1229.330065809" watchObservedRunningTime="2026-02-16 15:13:55.159033495 +0000 UTC m=+1229.344010571" Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.289356 4705 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"] Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.306208 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"] Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.172765 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerStarted","Data":"029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22"} Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.174024 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.177275 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerStarted","Data":"6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2"} Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.181878 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerStarted","Data":"d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1"} Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.202757 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" podStartSLOduration=5.202731758 podStartE2EDuration="5.202731758s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:56.200587198 +0000 UTC m=+1230.385564274" watchObservedRunningTime="2026-02-16 15:13:56.202731758 +0000 UTC m=+1230.387708824" Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.486920 4705 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" path="/var/lib/kubelet/pods/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64/volumes" Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.247552 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerStarted","Data":"c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7"} Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.247806 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-log" containerID="cri-o://6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2" gracePeriod=30 Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.248403 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-httpd" containerID="cri-o://c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7" gracePeriod=30 Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.266159 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerStarted","Data":"1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0"} Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.266303 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-log" containerID="cri-o://d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1" gracePeriod=30 Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.266479 4705 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-httpd" containerID="cri-o://1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0" gracePeriod=30 Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.279282 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.279265714 podStartE2EDuration="6.279265714s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:57.276808264 +0000 UTC m=+1231.461785360" watchObservedRunningTime="2026-02-16 15:13:57.279265714 +0000 UTC m=+1231.464242790" Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.314625 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.314604227 podStartE2EDuration="6.314604227s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:57.310200114 +0000 UTC m=+1231.495177210" watchObservedRunningTime="2026-02-16 15:13:57.314604227 +0000 UTC m=+1231.499581303" Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.285131 4705 generic.go:334] "Generic (PLEG): container finished" podID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerID="c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7" exitCode=0 Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.285550 4705 generic.go:334] "Generic (PLEG): container finished" podID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerID="6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2" exitCode=143 Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.285615 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerDied","Data":"c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7"} Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.285674 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerDied","Data":"6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2"} Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.291765 4705 generic.go:334] "Generic (PLEG): container finished" podID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerID="1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0" exitCode=143 Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.291804 4705 generic.go:334] "Generic (PLEG): container finished" podID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerID="d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1" exitCode=143 Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.291832 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerDied","Data":"1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0"} Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.291865 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerDied","Data":"d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1"} Feb 16 15:13:59 crc kubenswrapper[4705]: I0216 15:13:59.323838 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vj8fn" event={"ID":"404be51c-5189-4fe5-a795-3e4cf4146f9d","Type":"ContainerDied","Data":"f5c17e7d39b9ddbcba6b3a6b64fb5b75e17d9532faec51dee99c1ace5575000a"} Feb 16 
15:13:59 crc kubenswrapper[4705]: I0216 15:13:59.324488 4705 generic.go:334] "Generic (PLEG): container finished" podID="404be51c-5189-4fe5-a795-3e4cf4146f9d" containerID="f5c17e7d39b9ddbcba6b3a6b64fb5b75e17d9532faec51dee99c1ace5575000a" exitCode=0 Feb 16 15:14:01 crc kubenswrapper[4705]: I0216 15:14:01.565269 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 15:14:01 crc kubenswrapper[4705]: I0216 15:14:01.572819 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.196988 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.261264 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"] Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.261659 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" containerID="cri-o://5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5" gracePeriod=10 Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.371959 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.977693 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Feb 16 15:14:03 crc kubenswrapper[4705]: I0216 15:14:03.384973 4705 generic.go:334] "Generic (PLEG): container finished" podID="e1826cbb-e404-4385-8af6-36eab56118fb" 
containerID="5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5" exitCode=0 Feb 16 15:14:03 crc kubenswrapper[4705]: I0216 15:14:03.385073 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerDied","Data":"5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5"} Feb 16 15:14:07 crc kubenswrapper[4705]: I0216 15:14:07.977416 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.007092 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.030222 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vj8fn" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.140781 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.140858 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.140966 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141048 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141396 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141430 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141462 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141597 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141664 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141708 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141812 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141866 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141920 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141996 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.143950 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs" (OuterVolumeSpecName: "logs") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.145590 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.153550 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts" (OuterVolumeSpecName: "scripts") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.161597 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj" (OuterVolumeSpecName: "kube-api-access-lnggj") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "kube-api-access-lnggj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.162510 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t" (OuterVolumeSpecName: "kube-api-access-mqd8t") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "kube-api-access-mqd8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.163410 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.164481 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.175995 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e" (OuterVolumeSpecName: "glance") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.206184 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts" (OuterVolumeSpecName: "scripts") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.206242 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.230900 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data" (OuterVolumeSpecName: "config-data") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254233 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254639 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254649 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254667 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254678 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254703 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254712 4705 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254728 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.255468 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") on node \"crc\" " Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.255486 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.255500 4705 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.259272 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data" (OuterVolumeSpecName: "config-data") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.264753 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.270655 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.294897 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.295267 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e") on node "crc" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.357297 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.357339 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.357353 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.357716 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.485476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerDied","Data":"8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e"} Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.485555 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.485569 4705 scope.go:117] "RemoveContainer" containerID="c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.490037 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vj8fn" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.490131 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vj8fn" event={"ID":"404be51c-5189-4fe5-a795-3e4cf4146f9d","Type":"ContainerDied","Data":"a22bf069c9870ef1f56ea0f515bff169c7320c6d24545673e51b254c497e6367"} Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.490176 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a22bf069c9870ef1f56ea0f515bff169c7320c6d24545673e51b254c497e6367" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.535736 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.548425 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559017 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559676 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-log" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559698 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-log" Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559717 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" containerName="init" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559727 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" containerName="init" Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559777 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-httpd" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559786 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-httpd" Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559802 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404be51c-5189-4fe5-a795-3e4cf4146f9d" containerName="keystone-bootstrap" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559810 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="404be51c-5189-4fe5-a795-3e4cf4146f9d" containerName="keystone-bootstrap" Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559828 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" containerName="init" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559834 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" containerName="init" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560078 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-log" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560102 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" containerName="init" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560119 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="404be51c-5189-4fe5-a795-3e4cf4146f9d" 
containerName="keystone-bootstrap" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560128 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" containerName="init" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560137 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-httpd" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.561442 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.564567 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.565090 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.568183 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665675 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665802 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665852 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665883 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665922 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.666253 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " 
pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.666594 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769689 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769769 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769797 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769817 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" 
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769847 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769912 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769966 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.770027 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.770760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 
15:14:10.770987 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.774953 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.775018 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69f9a0afde09cde3194ac3fcfa9df7bd80860335646625dfa8f7f213d22f9d05/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.776045 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.776097 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.776791 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.777310 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.788580 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.826336 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.884359 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.177773 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vj8fn"] Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.189422 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vj8fn"] Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.259937 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-m8mrp"] Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.261820 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.264432 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.264815 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.266172 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.266172 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g4ghk" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.266256 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.323628 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m8mrp"] Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.395661 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") pod \"keystone-bootstrap-m8mrp\" 
(UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.395724 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.395938 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.396342 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.396496 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.396558 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") pod \"keystone-bootstrap-m8mrp\" (UID: 
\"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498764 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498798 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498890 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498929 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 
crc kubenswrapper[4705]: I0216 15:14:11.499028 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.506148 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.506221 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.509087 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.514196 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.514914 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.524161 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.591513 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:12 crc kubenswrapper[4705]: I0216 15:14:12.434334 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="404be51c-5189-4fe5-a795-3e4cf4146f9d" path="/var/lib/kubelet/pods/404be51c-5189-4fe5-a795-3e4cf4146f9d/volumes" Feb 16 15:14:12 crc kubenswrapper[4705]: I0216 15:14:12.436029 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" path="/var/lib/kubelet/pods/89e0f96f-ae09-4238-9d36-1eafc315ed7e/volumes" Feb 16 15:14:14 crc kubenswrapper[4705]: I0216 15:14:14.588845 4705 generic.go:334] "Generic (PLEG): container finished" podID="baaef700-c962-494f-bee0-67990bf8bd84" containerID="2d2e1b5af863f030f5a82ceae3d64982596f76c2c83b8724fb79e532c3c6c337" exitCode=0 Feb 16 15:14:14 crc kubenswrapper[4705]: I0216 15:14:14.589129 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-76rfw" event={"ID":"baaef700-c962-494f-bee0-67990bf8bd84","Type":"ContainerDied","Data":"2d2e1b5af863f030f5a82ceae3d64982596f76c2c83b8724fb79e532c3c6c337"} Feb 16 15:14:17 crc kubenswrapper[4705]: I0216 15:14:17.977989 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" 
podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Feb 16 15:14:17 crc kubenswrapper[4705]: I0216 15:14:17.978908 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:14:19 crc kubenswrapper[4705]: E0216 15:14:19.586727 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 16 15:14:19 crc kubenswrapper[4705]: E0216 15:14:19.587294 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf5hf4h57fh576h677h5f7h664h5bfh88h67dh656h675h5f9h5bdh658hb9h69hfdh57bh59dhf7hfch5f5h7hf7h64dh57dh5ffh5ffh7bh57ch597q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Vol
umeMount{Name:kube-api-access-g4tkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b1b8bc91-daf7-4fa0-aad2-7d14527c2298): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.048953 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.049179 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nfsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4vj9p_openstack(302aee2f-61be-439f-a04e-356243bb65b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.050486 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4vj9p" 
podUID="302aee2f-61be-439f-a04e-356243bb65b6" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.191352 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.209517 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.274989 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-76rfw" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.298537 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.298620 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.298705 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299201 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: 
\"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299491 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299650 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299737 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299805 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.301941 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.305560 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts" (OuterVolumeSpecName: "scripts") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.305825 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs" (OuterVolumeSpecName: "logs") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.307145 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85" (OuterVolumeSpecName: "kube-api-access-g6n85") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "kube-api-access-g6n85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.327743 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157" (OuterVolumeSpecName: "glance") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.346488 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.368893 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.377554 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data" (OuterVolumeSpecName: "config-data") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.402864 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.402943 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403001 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403030 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403140 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") pod \"baaef700-c962-494f-bee0-67990bf8bd84\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403161 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptwqp\" (UniqueName: 
\"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") pod \"baaef700-c962-494f-bee0-67990bf8bd84\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") pod \"baaef700-c962-494f-bee0-67990bf8bd84\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403346 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403927 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403957 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") on node \"crc\" " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403969 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403979 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") on node \"crc\" 
DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403989 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403998 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.404006 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.404015 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.408962 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp" (OuterVolumeSpecName: "kube-api-access-ptwqp") pod "baaef700-c962-494f-bee0-67990bf8bd84" (UID: "baaef700-c962-494f-bee0-67990bf8bd84"). InnerVolumeSpecName "kube-api-access-ptwqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.409632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z" (OuterVolumeSpecName: "kube-api-access-6kc2z") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "kube-api-access-6kc2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.439431 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.439610 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157") on node "crc" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.442768 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config" (OuterVolumeSpecName: "config") pod "baaef700-c962-494f-bee0-67990bf8bd84" (UID: "baaef700-c962-494f-bee0-67990bf8bd84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.448002 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baaef700-c962-494f-bee0-67990bf8bd84" (UID: "baaef700-c962-494f-bee0-67990bf8bd84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.465538 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.469826 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config" (OuterVolumeSpecName: "config") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.481797 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.485121 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.506953 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.506984 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507000 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507015 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507026 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507039 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptwqp\" (UniqueName: \"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507054 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 
15:14:20.507066 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507080 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.677297 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.677318 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerDied","Data":"c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3"} Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.681523 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerDied","Data":"ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221"} Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.681587 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.683904 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-76rfw" event={"ID":"baaef700-c962-494f-bee0-67990bf8bd84","Type":"ContainerDied","Data":"5c65ee7316022a6067fee6060582c1e9c9148141d1bad10ffaade19ce9d7d503"} Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.683935 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-76rfw" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.683952 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c65ee7316022a6067fee6060582c1e9c9148141d1bad10ffaade19ce9d7d503" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.686599 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-4vj9p" podUID="302aee2f-61be-439f-a04e-356243bb65b6" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.718027 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.752254 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777114 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.777797 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="init" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777813 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="init" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.777828 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baaef700-c962-494f-bee0-67990bf8bd84" containerName="neutron-db-sync" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777836 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="baaef700-c962-494f-bee0-67990bf8bd84" containerName="neutron-db-sync" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 
15:14:20.777854 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-httpd" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777861 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-httpd" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.777874 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777881 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.777899 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-log" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777905 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-log" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.778161 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-httpd" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.778174 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="baaef700-c962-494f-bee0-67990bf8bd84" containerName="neutron-db-sync" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.778191 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-log" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.778202 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.779690 4705 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.783850 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.784309 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.789715 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.800671 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.814878 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919419 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919470 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919511 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919549 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919634 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919678 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.022616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.022728 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.022805 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.022990 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.023089 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.023164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.023275 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.023345 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.024099 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.025038 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.026142 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.026210 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5d44f58a274729942503542a04ea080ac58862a31aa07a9ece94d5eb6543b70/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.028669 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.034900 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.039858 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.051906 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.053441 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.109558 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.121831 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.464262 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.467579 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.504513 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.643954 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644022 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644079 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644295 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644473 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.723572 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.726125 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.729445 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.729815 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.730073 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7rvmg" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.730249 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.746497 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.747189 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.747288 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.749205 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.754530 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.754605 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.754675 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.754748 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.755668 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.756574 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.757193 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.766277 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") pod 
\"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.783839 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.793727 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857109 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857290 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857317 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959369 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959673 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959734 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.963769 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.965526 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.966225 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.967352 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: 
\"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.977944 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.064655 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.201997 4705 scope.go:117] "RemoveContainer" containerID="6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2" Feb 16 15:14:22 crc kubenswrapper[4705]: E0216 15:14:22.222492 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 15:14:22 crc kubenswrapper[4705]: E0216 15:14:22.223127 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gc7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-scncd_openstack(ddb24908-6026-4fe7-81b6-345402c9398e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:14:22 crc kubenswrapper[4705]: E0216 15:14:22.224802 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-scncd" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.442944 4705 scope.go:117] "RemoveContainer" containerID="1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.580477 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" path="/var/lib/kubelet/pods/e059220a-f230-42fe-b1bf-b19be7abd7e1/volumes" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.589039 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" path="/var/lib/kubelet/pods/e1826cbb-e404-4385-8af6-36eab56118fb/volumes" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.647640 4705 scope.go:117] "RemoveContainer" containerID="d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.749882 4705 scope.go:117] "RemoveContainer" containerID="5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5" Feb 16 15:14:22 crc kubenswrapper[4705]: E0216 15:14:22.769649 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-scncd" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.817201 4705 scope.go:117] "RemoveContainer" containerID="24e97e68f945ea90afb1476172863c94c103dc49fd76b27d1442100f2e0fdb3f" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.883251 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:22 crc kubenswrapper[4705]: W0216 15:14:22.895872 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8f8a7c2_28a1_45b0_ac6a_9b6f33ac1a73.slice/crio-c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee WatchSource:0}: Error finding container c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee: Status 404 returned error can't find the container with id c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.979721 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.334960 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m8mrp"] Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.344800 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.448252 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.529062 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:23 crc kubenswrapper[4705]: W0216 15:14:23.709482 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod736c4c77_178b_40b8_8f6f_adb8b4b1ea6d.slice/crio-18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251 WatchSource:0}: Error finding container 18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251: Status 404 returned error can't find the container with id 18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251 Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.772892 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerStarted","Data":"15fe536f2d1e7276c5b6aa9bd3efbc8aff43c887dcf49127f48384d48325f958"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.779206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerStarted","Data":"18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.792922 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerStarted","Data":"8d6f6b83879b1871c1ce4b4df4249213068c9c5c2acaf7af7da436588553b117"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.797670 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz52p" event={"ID":"72538f80-8a9f-451f-9653-4f1faeec593c","Type":"ContainerStarted","Data":"e19781e10423d51e9d0ddb50f45ae545361f191e04463e485e5d4a1ca06560e1"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.804050 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerStarted","Data":"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.804080 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerStarted","Data":"c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.806902 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m8mrp" event={"ID":"eeee3c96-5da7-42eb-9fd9-07a5f09182d5","Type":"ContainerStarted","Data":"1645b9f5eaf15b174986ab0807fdf3998aa93d4543ba16143b96136f511e58ce"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.809231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f8fxj" event={"ID":"e652b8a2-fe79-4cdc-b376-c4bc0b85197f","Type":"ContainerStarted","Data":"e15307e3817ddf50b95ef7cb58ca5a91c87caee40526fb238aca09e99fde3e55"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.884231 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-nz52p" podStartSLOduration=3.996979965 podStartE2EDuration="32.884205898s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="2026-02-16 15:13:53.268576198 +0000 UTC m=+1227.453553274" lastFinishedPulling="2026-02-16 15:14:22.155802131 +0000 UTC m=+1256.340779207" observedRunningTime="2026-02-16 15:14:23.866634373 +0000 UTC m=+1258.051611459" watchObservedRunningTime="2026-02-16 15:14:23.884205898 +0000 UTC m=+1258.069182974" Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.897067 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-f8fxj" podStartSLOduration=6.279759635 podStartE2EDuration="32.897050169s" 
podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="2026-02-16 15:13:53.420343207 +0000 UTC m=+1227.605320283" lastFinishedPulling="2026-02-16 15:14:20.037633741 +0000 UTC m=+1254.222610817" observedRunningTime="2026-02-16 15:14:23.886721219 +0000 UTC m=+1258.071698305" watchObservedRunningTime="2026-02-16 15:14:23.897050169 +0000 UTC m=+1258.082027245" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.011355 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75d799457-fvqj6"] Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.014266 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.028864 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.029135 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.034530 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75d799457-fvqj6"] Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.193282 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194230 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " 
pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194311 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194425 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194496 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194567 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194632 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " 
pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298548 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298795 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298856 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298905 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298960 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.299012 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.304303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.304314 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.308689 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.315134 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.327009 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.327448 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.353524 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.392323 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.840912 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerStarted","Data":"9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.843000 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m8mrp" event={"ID":"eeee3c96-5da7-42eb-9fd9-07a5f09182d5","Type":"ContainerStarted","Data":"99a77b47a3f02f20d1a89b92aa183dce6d0d9402668b42b604a80e789789f55a"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.848322 4705 generic.go:334] "Generic (PLEG): container finished" podID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerID="cc5c6c10d91867ec0e668fe37ec2a652d379064601d63333e598987b86ebe834" exitCode=0 Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.848555 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerDied","Data":"cc5c6c10d91867ec0e668fe37ec2a652d379064601d63333e598987b86ebe834"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.864034 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerStarted","Data":"4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.880819 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerStarted","Data":"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.880864 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.878207 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-m8mrp" podStartSLOduration=13.878183181 podStartE2EDuration="13.878183181s" podCreationTimestamp="2026-02-16 15:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:24.869107316 +0000 UTC m=+1259.054084392" watchObservedRunningTime="2026-02-16 15:14:24.878183181 +0000 UTC m=+1259.063160257" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.928741 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77886f8dfb-96bnn" podStartSLOduration=3.928719583 podStartE2EDuration="3.928719583s" podCreationTimestamp="2026-02-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:24.892987748 +0000 UTC m=+1259.077964844" watchObservedRunningTime="2026-02-16 15:14:24.928719583 +0000 UTC m=+1259.113696659" Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.273337 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75d799457-fvqj6"] Feb 16 15:14:25 crc kubenswrapper[4705]: W0216 15:14:25.307574 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5639f9d_2d22_47cb_b481_10e88dc7f90f.slice/crio-a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b WatchSource:0}: Error finding container a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b: Status 404 returned error can't find the container with id a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.883794 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/0.log" Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.884902 4705 generic.go:334] "Generic (PLEG): container finished" podID="b078dc5a-bbed-4006-9d76-370271a27353" containerID="4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e" exitCode=1 Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.885197 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.886388 4705 scope.go:117] "RemoveContainer" containerID="4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e" Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.895391 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerStarted","Data":"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.895438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerStarted","Data":"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.905345 4705 generic.go:334] "Generic (PLEG): container finished" podID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" containerID="e15307e3817ddf50b95ef7cb58ca5a91c87caee40526fb238aca09e99fde3e55" exitCode=0 Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.905426 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f8fxj" 
event={"ID":"e652b8a2-fe79-4cdc-b376-c4bc0b85197f","Type":"ContainerDied","Data":"e15307e3817ddf50b95ef7cb58ca5a91c87caee40526fb238aca09e99fde3e55"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.924173 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerStarted","Data":"f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.924517 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.938855 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerStarted","Data":"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.968612 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerStarted","Data":"b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.968698 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerStarted","Data":"a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.983031 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.983000801 podStartE2EDuration="5.983000801s" podCreationTimestamp="2026-02-16 15:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 15:14:25.935414414 +0000 UTC m=+1260.120391500" watchObservedRunningTime="2026-02-16 15:14:25.983000801 +0000 UTC m=+1260.167977877" Feb 16 15:14:26 crc kubenswrapper[4705]: I0216 15:14:26.062272 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.06224046 podStartE2EDuration="16.06224046s" podCreationTimestamp="2026-02-16 15:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:26.000418541 +0000 UTC m=+1260.185395617" watchObservedRunningTime="2026-02-16 15:14:26.06224046 +0000 UTC m=+1260.247217536" Feb 16 15:14:26 crc kubenswrapper[4705]: I0216 15:14:26.074289 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" podStartSLOduration=5.074245057 podStartE2EDuration="5.074245057s" podCreationTimestamp="2026-02-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:26.029749526 +0000 UTC m=+1260.214726602" watchObservedRunningTime="2026-02-16 15:14:26.074245057 +0000 UTC m=+1260.259222133" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.030192 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerStarted","Data":"338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37"} Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.031149 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.038219 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/1.log" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.039121 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/0.log" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.040089 4705 generic.go:334] "Generic (PLEG): container finished" podID="b078dc5a-bbed-4006-9d76-370271a27353" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa" exitCode=1 Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.040182 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa"} Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.040539 4705 scope.go:117] "RemoveContainer" containerID="4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.041263 4705 scope.go:117] "RemoveContainer" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa" Feb 16 15:14:27 crc kubenswrapper[4705]: E0216 15:14:27.041830 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.077392 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75d799457-fvqj6" podStartSLOduration=4.077354728 podStartE2EDuration="4.077354728s" podCreationTimestamp="2026-02-16 15:14:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:27.074452366 +0000 UTC m=+1261.259429442" watchObservedRunningTime="2026-02-16 15:14:27.077354728 +0000 UTC m=+1261.262331804" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.558950 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f8fxj" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750025 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750097 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750206 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750238 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750501 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750778 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs" (OuterVolumeSpecName: "logs") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.751130 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.764513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts" (OuterVolumeSpecName: "scripts") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.764900 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs" (OuterVolumeSpecName: "kube-api-access-m5mzs") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "kube-api-access-m5mzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.854237 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.854284 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.896567 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.926843 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data" (OuterVolumeSpecName: "config-data") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.957029 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.957083 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.115230 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f8fxj" event={"ID":"e652b8a2-fe79-4cdc-b376-c4bc0b85197f","Type":"ContainerDied","Data":"1be0d2c6579adbd3cc2685214fa08e5f78ef226638707188ec8a446ccb1b6a4c"} Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.115292 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be0d2c6579adbd3cc2685214fa08e5f78ef226638707188ec8a446ccb1b6a4c" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.115417 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-f8fxj" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.128427 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-565b84d684-sh8jq"] Feb 16 15:14:28 crc kubenswrapper[4705]: E0216 15:14:28.129103 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" containerName="placement-db-sync" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.129122 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" containerName="placement-db-sync" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.129357 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" containerName="placement-db-sync" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.130843 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.136004 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xbqk5" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.136208 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.136328 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.136503 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.138475 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.177762 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/1.log" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.190530 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-565b84d684-sh8jq"] Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.191594 4705 scope.go:117] "RemoveContainer" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa" Feb 16 15:14:28 crc kubenswrapper[4705]: E0216 15:14:28.192052 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268585 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268680 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") pod 
\"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268776 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268800 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268870 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371170 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") pod \"placement-565b84d684-sh8jq\" (UID: 
\"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371339 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " 
pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371559 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.372027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.380738 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.381320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.382006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.383003 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.384101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.394812 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.462080 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.997041 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-565b84d684-sh8jq"] Feb 16 15:14:29 crc kubenswrapper[4705]: I0216 15:14:29.205738 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerStarted","Data":"abe1e154dc793291fe4a1e1361bdea85c411201d08d5b6df947af6208be90837"} Feb 16 15:14:29 crc kubenswrapper[4705]: I0216 15:14:29.209473 4705 generic.go:334] "Generic (PLEG): container finished" podID="72538f80-8a9f-451f-9653-4f1faeec593c" containerID="e19781e10423d51e9d0ddb50f45ae545361f191e04463e485e5d4a1ca06560e1" exitCode=0 Feb 16 15:14:29 crc kubenswrapper[4705]: I0216 15:14:29.209507 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz52p" event={"ID":"72538f80-8a9f-451f-9653-4f1faeec593c","Type":"ContainerDied","Data":"e19781e10423d51e9d0ddb50f45ae545361f191e04463e485e5d4a1ca06560e1"} Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.221731 4705 generic.go:334] "Generic (PLEG): container finished" podID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" containerID="99a77b47a3f02f20d1a89b92aa183dce6d0d9402668b42b604a80e789789f55a" exitCode=0 Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.221828 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m8mrp" event={"ID":"eeee3c96-5da7-42eb-9fd9-07a5f09182d5","Type":"ContainerDied","Data":"99a77b47a3f02f20d1a89b92aa183dce6d0d9402668b42b604a80e789789f55a"} Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.744947 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-nz52p" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.854797 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") pod \"72538f80-8a9f-451f-9653-4f1faeec593c\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.854885 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") pod \"72538f80-8a9f-451f-9653-4f1faeec593c\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.855113 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") pod \"72538f80-8a9f-451f-9653-4f1faeec593c\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.869140 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26" (OuterVolumeSpecName: "kube-api-access-ngt26") pod "72538f80-8a9f-451f-9653-4f1faeec593c" (UID: "72538f80-8a9f-451f-9653-4f1faeec593c"). InnerVolumeSpecName "kube-api-access-ngt26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.885485 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.887085 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.919632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72538f80-8a9f-451f-9653-4f1faeec593c" (UID: "72538f80-8a9f-451f-9653-4f1faeec593c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.956993 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.959121 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.959154 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.964197 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.045591 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data" (OuterVolumeSpecName: "config-data") pod "72538f80-8a9f-451f-9653-4f1faeec593c" (UID: "72538f80-8a9f-451f-9653-4f1faeec593c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.061709 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.122550 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.122611 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.161614 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.175148 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.242801 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-nz52p" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.244345 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz52p" event={"ID":"72538f80-8a9f-451f-9653-4f1faeec593c","Type":"ContainerDied","Data":"fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec"} Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.244426 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.244448 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.244560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.245823 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.245893 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.800560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.887179 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"] Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.887641 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns" containerID="cri-o://029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22" gracePeriod=10 
Feb 16 15:14:32 crc kubenswrapper[4705]: I0216 15:14:32.194981 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.184:5353: connect: connection refused" Feb 16 15:14:32 crc kubenswrapper[4705]: I0216 15:14:32.264797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerStarted","Data":"0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802"} Feb 16 15:14:32 crc kubenswrapper[4705]: I0216 15:14:32.269775 4705 generic.go:334] "Generic (PLEG): container finished" podID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerID="029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22" exitCode=0 Feb 16 15:14:32 crc kubenswrapper[4705]: I0216 15:14:32.269913 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerDied","Data":"029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22"} Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.280760 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.280792 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.280838 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.280873 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.951727 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053633 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053679 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053706 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053946 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053982 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.054028 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.059444 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts" (OuterVolumeSpecName: "scripts") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.059604 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.063284 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.094805 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs" (OuterVolumeSpecName: "kube-api-access-hmrcs") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "kube-api-access-hmrcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.119592 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data" (OuterVolumeSpecName: "config-data") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.121558 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160017 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160055 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160066 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160075 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 
16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160090 4705 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160099 4705 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.300292 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m8mrp" event={"ID":"eeee3c96-5da7-42eb-9fd9-07a5f09182d5","Type":"ContainerDied","Data":"1645b9f5eaf15b174986ab0807fdf3998aa93d4543ba16143b96136f511e58ce"} Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.300338 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1645b9f5eaf15b174986ab0807fdf3998aa93d4543ba16143b96136f511e58ce" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.300491 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.898240 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.899124 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.909126 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.922234 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.922354 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.925707 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.186588 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6cd49d8b6b-6gdmx"] Feb 16 15:14:35 crc kubenswrapper[4705]: E0216 15:14:35.187225 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" containerName="keystone-bootstrap" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.187246 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" containerName="keystone-bootstrap" Feb 16 15:14:35 crc kubenswrapper[4705]: E0216 15:14:35.187260 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72538f80-8a9f-451f-9653-4f1faeec593c" containerName="heat-db-sync" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.187266 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="72538f80-8a9f-451f-9653-4f1faeec593c" containerName="heat-db-sync" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.187550 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" containerName="keystone-bootstrap" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.187575 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="72538f80-8a9f-451f-9653-4f1faeec593c" containerName="heat-db-sync" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.188593 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.195499 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g4ghk" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.195832 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.196092 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.196566 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.196727 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.197763 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.213590 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cd49d8b6b-6gdmx"] Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299583 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-credential-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-combined-ca-bundle\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299676 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-config-data\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299714 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-public-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-scripts\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299825 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-fernet-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299941 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-internal-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299972 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6r2f\" (UniqueName: \"kubernetes.io/projected/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-kube-api-access-h6r2f\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.402440 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-scripts\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.402843 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-fernet-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403008 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-internal-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403038 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6r2f\" (UniqueName: \"kubernetes.io/projected/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-kube-api-access-h6r2f\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403122 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-credential-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403151 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-combined-ca-bundle\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-config-data\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403202 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-public-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.409211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-scripts\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.409916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-combined-ca-bundle\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.410243 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-public-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.410860 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-internal-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.420961 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-config-data\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: 
\"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.421035 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-fernet-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.436337 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6r2f\" (UniqueName: \"kubernetes.io/projected/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-kube-api-access-h6r2f\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.441557 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-credential-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.534345 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.196996 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.184:5353: connect: connection refused" Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.760303 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln"
Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793245 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") "
Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793857 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") "
Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793901 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") "
Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793930 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") "
Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") "
Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793986 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmx9q\" (UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") "
Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.807042 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q" (OuterVolumeSpecName: "kube-api-access-qmx9q") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "kube-api-access-qmx9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.897696 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmx9q\" (UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.037542 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.048080 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.059010 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.066768 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.075617 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config" (OuterVolumeSpecName: "config") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102582 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102622 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102633 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102646 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102657 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.109475 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cd49d8b6b-6gdmx"]
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.361628 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerDied","Data":"0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6"}
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.362123 4705 scope.go:117] "RemoveContainer" containerID="029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22"
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.361718 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln"
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.365797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerStarted","Data":"72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79"}
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.366481 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-565b84d684-sh8jq"
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.366540 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-565b84d684-sh8jq"
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.369955 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4vj9p" event={"ID":"302aee2f-61be-439f-a04e-356243bb65b6","Type":"ContainerStarted","Data":"a7a5ccb1213e05403b2c609c1d0142378875d98d299f4c29f81e4b95d8d137f8"}
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.376406 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerStarted","Data":"7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874"}
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.379121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cd49d8b6b-6gdmx" event={"ID":"57b8117e-e668-46a4-a652-8ac2b3e5d8ff","Type":"ContainerStarted","Data":"fab6c925c8353afae688e67a8410205c3190e7069e0e56260188ee57940675ff"}
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.398051 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4vj9p" podStartSLOduration=3.2440646810000002 podStartE2EDuration="47.398016456s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="2026-02-16 15:13:53.255803829 +0000 UTC m=+1227.440780905" lastFinishedPulling="2026-02-16 15:14:37.409755604 +0000 UTC m=+1271.594732680" observedRunningTime="2026-02-16 15:14:38.38605622 +0000 UTC m=+1272.571033306" watchObservedRunningTime="2026-02-16 15:14:38.398016456 +0000 UTC m=+1272.582993532"
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.400317 4705 scope.go:117] "RemoveContainer" containerID="bf52e5c5230ef41f1d394cd0295363d275a7ee8f615d1548a9442be8c7b9d9d3"
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.437742 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-565b84d684-sh8jq" podStartSLOduration=10.437720153 podStartE2EDuration="10.437720153s" podCreationTimestamp="2026-02-16 15:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:38.424475901 +0000 UTC m=+1272.609452977" watchObservedRunningTime="2026-02-16 15:14:38.437720153 +0000 UTC m=+1272.622697229"
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.489010 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"]
Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.520475 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"]
Feb 16 15:14:39 crc kubenswrapper[4705]: I0216 15:14:39.401048 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cd49d8b6b-6gdmx" event={"ID":"57b8117e-e668-46a4-a652-8ac2b3e5d8ff","Type":"ContainerStarted","Data":"12f4d905e22f39c8feaca1d87dae8f0d013b97499842894440bd1a9f3a475c76"}
Feb 16 15:14:39 crc kubenswrapper[4705]: I0216 15:14:39.405135 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-scncd" event={"ID":"ddb24908-6026-4fe7-81b6-345402c9398e","Type":"ContainerStarted","Data":"c4e7cf35ca9cdb1d088afb52cbad0fa1eb61329b9888ee9b04889ba66e69edd4"}
Feb 16 15:14:39 crc kubenswrapper[4705]: I0216 15:14:39.438797 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6cd49d8b6b-6gdmx" podStartSLOduration=4.438772126 podStartE2EDuration="4.438772126s" podCreationTimestamp="2026-02-16 15:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:39.421087888 +0000 UTC m=+1273.606064984" watchObservedRunningTime="2026-02-16 15:14:39.438772126 +0000 UTC m=+1273.623749212"
Feb 16 15:14:39 crc kubenswrapper[4705]: I0216 15:14:39.451799 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-scncd" podStartSLOduration=4.489733635 podStartE2EDuration="48.451777271s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="2026-02-16 15:13:53.448793458 +0000 UTC m=+1227.633770534" lastFinishedPulling="2026-02-16 15:14:37.410837094 +0000 UTC m=+1271.595814170" observedRunningTime="2026-02-16 15:14:39.450725252 +0000 UTC m=+1273.635702348" watchObservedRunningTime="2026-02-16 15:14:39.451777271 +0000 UTC m=+1273.636754377"
Feb 16 15:14:40 crc kubenswrapper[4705]: I0216 15:14:40.229564 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-565b84d684-sh8jq"
Feb 16 15:14:40 crc kubenswrapper[4705]: I0216 15:14:40.421562 4705 scope.go:117] "RemoveContainer" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa"
Feb 16 15:14:40 crc kubenswrapper[4705]: I0216 15:14:40.453902 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" path="/var/lib/kubelet/pods/8ee1b858-e5e9-4163-9fe6-e503be62c4f7/volumes"
Feb 16 15:14:40 crc kubenswrapper[4705]: I0216 15:14:40.454742 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6cd49d8b6b-6gdmx"
Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.435849 4705 generic.go:334] "Generic (PLEG): container finished" podID="302aee2f-61be-439f-a04e-356243bb65b6" containerID="a7a5ccb1213e05403b2c609c1d0142378875d98d299f4c29f81e4b95d8d137f8" exitCode=0
Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.436422 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4vj9p" event={"ID":"302aee2f-61be-439f-a04e-356243bb65b6","Type":"ContainerDied","Data":"a7a5ccb1213e05403b2c609c1d0142378875d98d299f4c29f81e4b95d8d137f8"}
Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.444313 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/2.log"
Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.444924 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/1.log"
Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.445471 4705 generic.go:334] "Generic (PLEG): container finished" podID="b078dc5a-bbed-4006-9d76-370271a27353" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" exitCode=1
Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.445529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9"}
Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.445592 4705 scope.go:117] "RemoveContainer" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa"
Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.447162 4705 scope.go:117] "RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9"
Feb 16 15:14:41 crc kubenswrapper[4705]: E0216 15:14:41.447502 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353"
Feb 16 15:14:42 crc kubenswrapper[4705]: I0216 15:14:42.469066 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/2.log"
Feb 16 15:14:43 crc kubenswrapper[4705]: I0216 15:14:43.486094 4705 generic.go:334] "Generic (PLEG): container finished" podID="ddb24908-6026-4fe7-81b6-345402c9398e" containerID="c4e7cf35ca9cdb1d088afb52cbad0fa1eb61329b9888ee9b04889ba66e69edd4" exitCode=0
Feb 16 15:14:43 crc kubenswrapper[4705]: I0216 15:14:43.488274 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-scncd" event={"ID":"ddb24908-6026-4fe7-81b6-345402c9398e","Type":"ContainerDied","Data":"c4e7cf35ca9cdb1d088afb52cbad0fa1eb61329b9888ee9b04889ba66e69edd4"}
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.185211 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4vj9p"
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.218525 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") pod \"302aee2f-61be-439f-a04e-356243bb65b6\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") "
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.219204 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") pod \"302aee2f-61be-439f-a04e-356243bb65b6\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") "
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.219547 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") pod \"302aee2f-61be-439f-a04e-356243bb65b6\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") "
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.247609 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz" (OuterVolumeSpecName: "kube-api-access-4nfsz") pod "302aee2f-61be-439f-a04e-356243bb65b6" (UID: "302aee2f-61be-439f-a04e-356243bb65b6"). InnerVolumeSpecName "kube-api-access-4nfsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.260284 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "302aee2f-61be-439f-a04e-356243bb65b6" (UID: "302aee2f-61be-439f-a04e-356243bb65b6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.301061 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "302aee2f-61be-439f-a04e-356243bb65b6" (UID: "302aee2f-61be-439f-a04e-356243bb65b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.349559 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.349627 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.349644 4705 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.529192 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4vj9p" event={"ID":"302aee2f-61be-439f-a04e-356243bb65b6","Type":"ContainerDied","Data":"220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf"}
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.529255 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf"
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.529293 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4vj9p"
Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.970479 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-scncd"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169039 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") "
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169143 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") "
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169198 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169309 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") "
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169437 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") "
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169469 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") "
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169489 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") "
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.171346 4705 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.174623 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g" (OuterVolumeSpecName: "kube-api-access-5gc7g") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "kube-api-access-5gc7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.174960 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts" (OuterVolumeSpecName: "scripts") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.175834 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.243224 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data" (OuterVolumeSpecName: "config-data") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.250617 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273137 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273183 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273198 4705 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273213 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273227 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.451247 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.550450 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5bf77f7566-frgcc"]
Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.551111 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302aee2f-61be-439f-a04e-356243bb65b6" containerName="barbican-db-sync"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551130 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="302aee2f-61be-439f-a04e-356243bb65b6" containerName="barbican-db-sync"
Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.551166 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551174 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns"
Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.551187 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" containerName="cinder-db-sync"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551195 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" containerName="cinder-db-sync"
Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.551210 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="init"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551217 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="init"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551459 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="302aee2f-61be-439f-a04e-356243bb65b6" containerName="barbican-db-sync"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551483 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" containerName="cinder-db-sync"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551504 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.552837 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.571708 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-scncd" event={"ID":"ddb24908-6026-4fe7-81b6-345402c9398e","Type":"ContainerDied","Data":"25c35aaf8f4af9631df07d9053074c5f0aa7a4b2f00e10128c4a4c8292d954ed"}
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.571756 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c35aaf8f4af9631df07d9053074c5f0aa7a4b2f00e10128c4a4c8292d954ed"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.571832 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-scncd"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.575100 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.575227 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.575431 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4fhnl"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.575849 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68c59b585f-gvjjl"]
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.582225 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68c59b585f-gvjjl"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586318 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerStarted","Data":"ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de"}
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586544 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="ceilometer-notification-agent" containerID="cri-o://9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363" gracePeriod=30
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586718 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586772 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd" containerID="cri-o://ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de" gracePeriod=30
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586819 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="sg-core" containerID="cri-o://7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874" gracePeriod=30
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.592870 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.593398 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22qmz\" (UniqueName: \"kubernetes.io/projected/edea8308-f2c7-4f10-993c-974327a36727-kube-api-access-22qmz\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.593522 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.593830 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data-custom\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.594201 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edea8308-f2c7-4f10-993c-974327a36727-logs\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.594563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-combined-ca-bundle\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.607295 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bf77f7566-frgcc"]
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.649716 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68c59b585f-gvjjl"]
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709590 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data-custom\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709679 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data-custom\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-logs\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709817 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edea8308-f2c7-4f10-993c-974327a36727-logs\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709844 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-combined-ca-bundle\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7646m\" (UniqueName: \"kubernetes.io/projected/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-kube-api-access-7646m\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709989 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-combined-ca-bundle\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.710068 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22qmz\" (UniqueName: \"kubernetes.io/projected/edea8308-f2c7-4f10-993c-974327a36727-kube-api-access-22qmz\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.710101 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.710187 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.713715 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edea8308-f2c7-4f10-993c-974327a36727-logs\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.718106 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data-custom\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.733781 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc"
Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.735040 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-combined-ca-bundle\") pod
\"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.744757 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22qmz\" (UniqueName: \"kubernetes.io/projected/edea8308-f2c7-4f10-993c-974327a36727-kube-api-access-22qmz\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.770544 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.774837 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.796077 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.812195 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data-custom\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.812269 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-logs\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.812331 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-combined-ca-bundle\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.812351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7646m\" (UniqueName: \"kubernetes.io/projected/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-kube-api-access-7646m\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.813072 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-logs\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.817179 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.822581 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-combined-ca-bundle\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.833486 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data-custom\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.838077 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.854041 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7646m\" (UniqueName: \"kubernetes.io/projected/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-kube-api-access-7646m\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.911592 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923861 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923903 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923946 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") pod 
\"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923990 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.964750 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.971448 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.973898 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.991865 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.005693 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.028146 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.028949 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.029042 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.029081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.029126 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.029164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.030575 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.031099 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.032859 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.033260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.033689 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.072101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.120203 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.134072 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.135604 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.135659 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.135930 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.136947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") pod 
\"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.292667 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.292860 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.292920 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.292950 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.293132 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: 
\"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.294217 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.367959 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.384925 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.406925 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.424404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" 
Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.479720 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.612810 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.615940 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.626216 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.626724 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6g79l" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.626874 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.626998 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.639889 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.666308 4705 generic.go:334] "Generic (PLEG): container finished" podID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerID="7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874" exitCode=2 Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.666406 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerDied","Data":"7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874"} Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.749604 4705 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796428 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwc7\" (UniqueName: \"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796501 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796541 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796709 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796761 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" 
Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796784 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.822475 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.825258 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.875933 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900106 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900197 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 
15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpwc7\" (UniqueName: \"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900345 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900389 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.901147 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.905692 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.909842 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.912995 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.916164 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.916220 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.920701 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.921129 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.953265 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.959976 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpwc7\" (UniqueName: 
\"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007074 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dbw\" (UniqueName: \"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007157 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007193 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007363 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007496 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007716 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007754 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007985 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008032 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008068 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008117 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008236 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008456 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.046304 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.048874 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bf77f7566-frgcc"] Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113036 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113148 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113169 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113213 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113289 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113327 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dbw\" (UniqueName: \"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113345 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: 
\"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113363 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113414 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113444 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.117494 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.118115 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.118166 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.118814 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.119358 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.119933 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.125917 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.130919 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") pod \"cinder-api-0\" (UID: 
\"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.132143 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.134439 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.134897 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.153890 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dbw\" (UniqueName: \"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.166595 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.175485 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.275128 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.305333 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68c59b585f-gvjjl"] Feb 16 15:14:49 crc kubenswrapper[4705]: W0216 15:14:49.349126 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeff171da_ce4a_4c88_b7bd_b7b88e6ad322.slice/crio-1c09eadb99d613bc85130c76cf3fab952fc4000f8f699faa71d9527e30c09254 WatchSource:0}: Error finding container 1c09eadb99d613bc85130c76cf3fab952fc4000f8f699faa71d9527e30c09254: Status 404 returned error can't find the container with id 1c09eadb99d613bc85130c76cf3fab952fc4000f8f699faa71d9527e30c09254 Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.396818 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:49 crc kubenswrapper[4705]: W0216 15:14:49.418642 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56f5bc83_36d4_41e0_8b6f_2d0854d7a171.slice/crio-c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee WatchSource:0}: Error finding container c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee: Status 404 returned error can't find the container with id c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.606361 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.705726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" 
event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerStarted","Data":"494073a82ffb15d51ca9ccf70ddd818083ecfa9ff2e728289031a38cb377d7c0"} Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.714005 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" event={"ID":"edea8308-f2c7-4f10-993c-974327a36727","Type":"ContainerStarted","Data":"ea075161fc0ba88a8c9c3d0eaf5da57991df70ce7cba9dc4943ed932367998a9"} Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.720631 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" event={"ID":"56f5bc83-36d4-41e0-8b6f-2d0854d7a171","Type":"ContainerStarted","Data":"c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee"} Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.723443 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c59b585f-gvjjl" event={"ID":"eff171da-ce4a-4c88-b7bd-b7b88e6ad322","Type":"ContainerStarted","Data":"1c09eadb99d613bc85130c76cf3fab952fc4000f8f699faa71d9527e30c09254"} Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.876429 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:49 crc kubenswrapper[4705]: W0216 15:14:49.907184 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9fe5954_9b6f_4ba1_b8c5_fe8367c66051.slice/crio-e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1 WatchSource:0}: Error finding container e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1: Status 404 returned error can't find the container with id e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1 Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.966426 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:14:50 crc 
kubenswrapper[4705]: I0216 15:14:50.169666 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:50 crc kubenswrapper[4705]: W0216 15:14:50.198285 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69bc6a88_b325_43bd_af4c_55283723a765.slice/crio-50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7 WatchSource:0}: Error finding container 50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7: Status 404 returned error can't find the container with id 50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7 Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.757735 4705 generic.go:334] "Generic (PLEG): container finished" podID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" containerID="9705e72874e46f0081958ec36bf68284093b2887f407f1b198ebd0d1287ad79d" exitCode=0 Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.798110 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" event={"ID":"56f5bc83-36d4-41e0-8b6f-2d0854d7a171","Type":"ContainerDied","Data":"9705e72874e46f0081958ec36bf68284093b2887f407f1b198ebd0d1287ad79d"} Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.798314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerStarted","Data":"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6"} Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.803288 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerStarted","Data":"4cd8d63ef6157fd647119bfab51e4fd5281201daf21b70697f5351220cfe9c1c"} Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.824606 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerStarted","Data":"50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7"} Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.832534 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerStarted","Data":"e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.573486 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.737473 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738042 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738215 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738362 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") pod 
\"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738446 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738500 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.768614 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs" (OuterVolumeSpecName: "kube-api-access-6r7hs") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "kube-api-access-6r7hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.782764 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.792953 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config" (OuterVolumeSpecName: "config") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.793868 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.804244 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.805290 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.842988 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843035 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843049 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843061 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843073 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843086 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.850981 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerStarted","Data":"7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9"} Feb 16 15:14:51 crc kubenswrapper[4705]: 
I0216 15:14:51.856123 4705 generic.go:334] "Generic (PLEG): container finished" podID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerID="9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363" exitCode=0 Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.856190 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerDied","Data":"9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.858845 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" event={"ID":"56f5bc83-36d4-41e0-8b6f-2d0854d7a171","Type":"ContainerDied","Data":"c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.858906 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.858917 4705 scope.go:117] "RemoveContainer" containerID="9705e72874e46f0081958ec36bf68284093b2887f407f1b198ebd0d1287ad79d" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.863018 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerStarted","Data":"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.863503 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.863647 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.868008 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="541411df-f636-4dab-a4e2-2ecc8933f236" containerID="40437351e7b265646ad6bf7b8802bcd81622e7977bf5739847bd739b6a21b1a3" exitCode=0 Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.868050 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerDied","Data":"40437351e7b265646ad6bf7b8802bcd81622e7977bf5739847bd739b6a21b1a3"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.897744 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podStartSLOduration=4.897722185 podStartE2EDuration="4.897722185s" podCreationTimestamp="2026-02-16 15:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:51.883893016 +0000 UTC m=+1286.068870092" watchObservedRunningTime="2026-02-16 15:14:51.897722185 +0000 UTC m=+1286.082699261" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.969184 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.980975 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.065111 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.066388 4705 scope.go:117] "RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:14:52 crc kubenswrapper[4705]: E0216 15:14:52.066596 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd 
pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.067839 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.073048 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.192:9696/\": dial tcp 10.217.0.192:9696: connect: connection refused" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.131198 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.445114 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" path="/var/lib/kubelet/pods/56f5bc83-36d4-41e0-8b6f-2d0854d7a171/volumes" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.894235 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerStarted","Data":"bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e"} Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.901063 4705 scope.go:117] "RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:14:52 crc kubenswrapper[4705]: E0216 15:14:52.901352 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" 
Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.417645 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.516046 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.516328 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" containerID="cri-o://9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" gracePeriod=30 Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.750427 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-675dd58676-vnqw2"] Feb 16 15:14:54 crc kubenswrapper[4705]: E0216 15:14:54.751817 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" containerName="init" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.751900 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" containerName="init" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.752242 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" containerName="init" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.753723 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.758700 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.759032 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.769724 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-675dd58676-vnqw2"] Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851208 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-combined-ca-bundle\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851310 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851457 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2k4z\" (UniqueName: \"kubernetes.io/projected/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-kube-api-access-l2k4z\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data-custom\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851657 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-public-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851716 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-internal-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851875 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-logs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.881723 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66f94f69bf-82g78"] Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.886159 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.935051 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66f94f69bf-82g78"] Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.949135 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerStarted","Data":"baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac"} Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.949678 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api-log" containerID="cri-o://7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9" gracePeriod=30 Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.950268 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.950782 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api" containerID="cri-o://baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac" gracePeriod=30 Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.956508 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.961703 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-ovndb-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.961928 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2k4z\" (UniqueName: \"kubernetes.io/projected/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-kube-api-access-l2k4z\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-public-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962168 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-httpd-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962337 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data-custom\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962829 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-public-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962979 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-internal-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.963064 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.963148 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-internal-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.963329 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-logs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.963454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-combined-ca-bundle\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.965210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6qw\" (UniqueName: \"kubernetes.io/projected/f7edca3b-82f6-4cfb-9781-664afa855ba8-kube-api-access-2r6qw\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.965524 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-combined-ca-bundle\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.966111 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" event={"ID":"edea8308-f2c7-4f10-993c-974327a36727","Type":"ContainerStarted","Data":"18b2d58e32816cdf3a5f332aab5d3f8d5c7adef8ee63b9669545a686d9a96ee9"} Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.966166 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" event={"ID":"edea8308-f2c7-4f10-993c-974327a36727","Type":"ContainerStarted","Data":"8c54e77aa75a6d4e90c4a9051bd2351aa28e06ee5a65c76f41472f2d0ad3f455"} Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.968189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-logs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " 
pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.972700 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerStarted","Data":"8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd"} Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.983604 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-public-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.984169 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-internal-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.984354 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data-custom\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.988189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.989930 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l2k4z\" (UniqueName: \"kubernetes.io/projected/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-kube-api-access-l2k4z\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.001150 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c59b585f-gvjjl" event={"ID":"eff171da-ce4a-4c88-b7bd-b7b88e6ad322","Type":"ContainerStarted","Data":"78738dd29b3820c40646249d22aa469be73b0b7da171598d84211a0e2e406853"} Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.001219 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c59b585f-gvjjl" event={"ID":"eff171da-ce4a-4c88-b7bd-b7b88e6ad322","Type":"ContainerStarted","Data":"f02dbd745e13763de44f811718db3b8c4ba4c2c33d9ecb59872a53ccee0886dc"} Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.026286 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerStarted","Data":"e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1"} Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.026769 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.037149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-combined-ca-bundle\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.050698 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.050657104 
podStartE2EDuration="7.050657104s" podCreationTimestamp="2026-02-16 15:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:54.991175282 +0000 UTC m=+1289.176152368" watchObservedRunningTime="2026-02-16 15:14:55.050657104 +0000 UTC m=+1289.235634180" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.072643 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-combined-ca-bundle\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.072784 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6qw\" (UniqueName: \"kubernetes.io/projected/f7edca3b-82f6-4cfb-9781-664afa855ba8-kube-api-access-2r6qw\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.080048 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-ovndb-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.080307 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-public-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.080384 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-httpd-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.083170 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.083347 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-internal-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.097229 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.098458 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-internal-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.099147 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-combined-ca-bundle\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.099797 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-public-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.103934 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.110180 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-httpd-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.116379 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-ovndb-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.144128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6qw\" (UniqueName: \"kubernetes.io/projected/f7edca3b-82f6-4cfb-9781-664afa855ba8-kube-api-access-2r6qw\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.142966 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" podStartSLOduration=3.65196585 podStartE2EDuration="8.142129697s" podCreationTimestamp="2026-02-16 15:14:47 +0000 UTC" firstStartedPulling="2026-02-16 15:14:49.159720945 +0000 UTC m=+1283.344698021" lastFinishedPulling="2026-02-16 15:14:53.649884792 +0000 UTC m=+1287.834861868" observedRunningTime="2026-02-16 15:14:55.019121358 +0000 UTC m=+1289.204098444" watchObservedRunningTime="2026-02-16 15:14:55.142129697 +0000 UTC m=+1289.327106773" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.158467 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.017000586 podStartE2EDuration="7.158439636s" podCreationTimestamp="2026-02-16 15:14:48 +0000 UTC" firstStartedPulling="2026-02-16 15:14:49.9130172 +0000 UTC m=+1284.097994276" lastFinishedPulling="2026-02-16 15:14:51.05445625 +0000 UTC m=+1285.239433326" observedRunningTime="2026-02-16 15:14:55.04161955 +0000 UTC m=+1289.226596626" watchObservedRunningTime="2026-02-16 15:14:55.158439636 +0000 UTC m=+1289.343416712" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 
15:14:55.218138 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" podStartSLOduration=7.218107674 podStartE2EDuration="7.218107674s" podCreationTimestamp="2026-02-16 15:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:55.06685486 +0000 UTC m=+1289.251831936" watchObservedRunningTime="2026-02-16 15:14:55.218107674 +0000 UTC m=+1289.403084740" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.233042 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68c59b585f-gvjjl" podStartSLOduration=3.964790519 podStartE2EDuration="8.232827348s" podCreationTimestamp="2026-02-16 15:14:47 +0000 UTC" firstStartedPulling="2026-02-16 15:14:49.392299296 +0000 UTC m=+1283.577276372" lastFinishedPulling="2026-02-16 15:14:53.660336125 +0000 UTC m=+1287.845313201" observedRunningTime="2026-02-16 15:14:55.136277372 +0000 UTC m=+1289.321254458" watchObservedRunningTime="2026-02-16 15:14:55.232827348 +0000 UTC m=+1289.417804424" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.244266 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:56 crc kubenswrapper[4705]: I0216 15:14:56.042338 4705 generic.go:334] "Generic (PLEG): container finished" podID="69bc6a88-b325-43bd-af4c-55283723a765" containerID="7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9" exitCode=143 Feb 16 15:14:56 crc kubenswrapper[4705]: I0216 15:14:56.043588 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerDied","Data":"7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9"} Feb 16 15:14:56 crc kubenswrapper[4705]: I0216 15:14:56.176463 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-675dd58676-vnqw2"] Feb 16 15:14:56 crc kubenswrapper[4705]: W0216 15:14:56.178784 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab2c420d_8288_48f7_b53e_f480bf6d5a7f.slice/crio-c3a675bf4490daf65bedbfa8f6c01dc77dc451383742d71a88c7ddd964ab2cb4 WatchSource:0}: Error finding container c3a675bf4490daf65bedbfa8f6c01dc77dc451383742d71a88c7ddd964ab2cb4: Status 404 returned error can't find the container with id c3a675bf4490daf65bedbfa8f6c01dc77dc451383742d71a88c7ddd964ab2cb4 Feb 16 15:14:56 crc kubenswrapper[4705]: I0216 15:14:56.484345 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66f94f69bf-82g78"] Feb 16 15:14:56 crc kubenswrapper[4705]: W0216 15:14:56.524700 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7edca3b_82f6_4cfb_9781_664afa855ba8.slice/crio-16ba3099ee67684bc63739c914005adda05418bd1c7583db66826cfd69ac1d02 WatchSource:0}: Error finding container 16ba3099ee67684bc63739c914005adda05418bd1c7583db66826cfd69ac1d02: Status 404 returned error can't find the container with id 
16ba3099ee67684bc63739c914005adda05418bd1c7583db66826cfd69ac1d02 Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.085867 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f94f69bf-82g78" event={"ID":"f7edca3b-82f6-4cfb-9781-664afa855ba8","Type":"ContainerStarted","Data":"eb9bef067cacc8899ddad2f91d049253141374552bd4696dfbab09ae65a28437"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.086559 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f94f69bf-82g78" event={"ID":"f7edca3b-82f6-4cfb-9781-664afa855ba8","Type":"ContainerStarted","Data":"16ba3099ee67684bc63739c914005adda05418bd1c7583db66826cfd69ac1d02"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101340 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-675dd58676-vnqw2" event={"ID":"ab2c420d-8288-48f7-b53e-f480bf6d5a7f","Type":"ContainerStarted","Data":"e85400a6e39d43f0fdd9e551002e83837fca9acd0c3de06ca848b1cadbe00920"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101426 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-675dd58676-vnqw2" event={"ID":"ab2c420d-8288-48f7-b53e-f480bf6d5a7f","Type":"ContainerStarted","Data":"2d8e32a95ce5bb87765b8abffc1cd4ec9203bf1b92b0c04d4ab0889c6cb2e6e5"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101448 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-675dd58676-vnqw2" event={"ID":"ab2c420d-8288-48f7-b53e-f480bf6d5a7f","Type":"ContainerStarted","Data":"c3a675bf4490daf65bedbfa8f6c01dc77dc451383742d71a88c7ddd964ab2cb4"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101488 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101509 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 
15:14:58 crc kubenswrapper[4705]: I0216 15:14:58.115036 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f94f69bf-82g78" event={"ID":"f7edca3b-82f6-4cfb-9781-664afa855ba8","Type":"ContainerStarted","Data":"6f67e5b7df9c0341ab3be966b2623ff8564c7d207abc503b1a0a866c06b9680d"} Feb 16 15:14:58 crc kubenswrapper[4705]: I0216 15:14:58.115736 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:58 crc kubenswrapper[4705]: I0216 15:14:58.142648 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66f94f69bf-82g78" podStartSLOduration=4.1426233 podStartE2EDuration="4.1426233s" podCreationTimestamp="2026-02-16 15:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:58.131792325 +0000 UTC m=+1292.316769411" watchObservedRunningTime="2026-02-16 15:14:58.1426233 +0000 UTC m=+1292.327600376" Feb 16 15:14:58 crc kubenswrapper[4705]: I0216 15:14:58.144820 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-675dd58676-vnqw2" podStartSLOduration=4.144805671 podStartE2EDuration="4.144805671s" podCreationTimestamp="2026-02-16 15:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:57.141406823 +0000 UTC m=+1291.326383899" watchObservedRunningTime="2026-02-16 15:14:58.144805671 +0000 UTC m=+1292.329782757" Feb 16 15:14:59 crc kubenswrapper[4705]: I0216 15:14:59.048312 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 15:14:59 crc kubenswrapper[4705]: I0216 15:14:59.178655 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:59 crc 
kubenswrapper[4705]: I0216 15:14:59.335445 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:14:59 crc kubenswrapper[4705]: I0216 15:14:59.335763 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="dnsmasq-dns" containerID="cri-o://f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218" gracePeriod=10 Feb 16 15:14:59 crc kubenswrapper[4705]: I0216 15:14:59.906004 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.044195 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174516 4705 generic.go:334] "Generic (PLEG): container finished" podID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerID="f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218" exitCode=0 Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174718 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerDied","Data":"f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218"} Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174796 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerDied","Data":"18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251"} Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174809 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174863 4705 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="probe" containerID="cri-o://8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd" gracePeriod=30 Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.175001 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="cinder-scheduler" containerID="cri-o://bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e" gracePeriod=30 Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.191464 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"] Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.193545 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.196743 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.196819 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.207253 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"] Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.273177 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.273403 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.273717 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbvb2\" (UniqueName: \"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.377722 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.377811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.377932 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbvb2\" (UniqueName: 
\"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.379577 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.387796 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.401306 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.414129 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbvb2\" (UniqueName: \"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.456664 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479446 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479526 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479723 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479838 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479880 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479977 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.511823 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99" (OuterVolumeSpecName: "kube-api-access-vlp99") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "kube-api-access-vlp99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.584603 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.628070 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.688732 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.910542 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config" (OuterVolumeSpecName: "config") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.910577 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.910567 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.914097 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.942821 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.998037 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.998098 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.998111 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.998124 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.086496 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.106114 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.188911 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/2.log" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.199678 4705 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.225658 4705 generic.go:334] "Generic (PLEG): container finished" podID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerID="8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd" exitCode=0 Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.225789 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerDied","Data":"8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd"} Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.240547 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/2.log" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241675 4705 generic.go:334] "Generic (PLEG): container finished" podID="b078dc5a-bbed-4006-9d76-370271a27353" containerID="9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" exitCode=0 Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241783 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241784 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e"} Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241869 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"8d6f6b83879b1871c1ce4b4df4249213068c9c5c2acaf7af7da436588553b117"} Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241907 4705 scope.go:117] "RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241810 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313394 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313558 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313645 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313771 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313857 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.325560 4705 scope.go:117] "RemoveContainer" containerID="9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.329510 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.333708 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65" (OuterVolumeSpecName: "kube-api-access-fgx65") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "kube-api-access-fgx65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.339810 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.353933 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.408576 4705 scope.go:117] "RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.408892 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.432053 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9\": container with ID starting with d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9 not found: ID does not exist" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.432113 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9"} err="failed to get container status \"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9\": rpc error: code = NotFound desc = could not find container \"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9\": container with ID starting with 
d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9 not found: ID does not exist" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.432141 4705 scope.go:117] "RemoveContainer" containerID="9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.440445 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.440482 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.440493 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.442232 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e\": container with ID starting with 9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e not found: ID does not exist" containerID="9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.442266 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e"} err="failed to get container status \"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e\": rpc error: code = NotFound desc = could not find container 
\"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e\": container with ID starting with 9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e not found: ID does not exist" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.448502 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config" (OuterVolumeSpecName: "config") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.450632 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6599894f76-dcwz8"] Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451280 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451301 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451320 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451327 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451339 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="dnsmasq-dns" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451345 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="dnsmasq-dns" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 
15:15:01.451359 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451378 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451395 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451400 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451451 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="init" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451457 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="init" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451654 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451670 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451690 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451709 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451724 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="dnsmasq-dns" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.453120 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.485500 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6599894f76-dcwz8"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.501168 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.509090 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:01 crc kubenswrapper[4705]: W0216 15:15:01.522844 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c6f056a_614c_4e3d_9bfe_de451b1d951d.slice/crio-bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7 WatchSource:0}: Error finding container bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7: Status 404 returned error can't find the container with id bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.545917 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-internal-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546092 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-public-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546229 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4122899e-95db-413a-ac71-f0574969753a-logs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546264 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9r2\" (UniqueName: \"kubernetes.io/projected/4122899e-95db-413a-ac71-f0574969753a-kube-api-access-pk9r2\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546505 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-scripts\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546609 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-config-data\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546664 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-combined-ca-bundle\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.547310 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.547327 4705 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.653735 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-combined-ca-bundle\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.653962 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-internal-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654094 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-public-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654165 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4122899e-95db-413a-ac71-f0574969753a-logs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654214 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk9r2\" (UniqueName: \"kubernetes.io/projected/4122899e-95db-413a-ac71-f0574969753a-kube-api-access-pk9r2\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654311 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-scripts\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654399 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-config-data\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.657016 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4122899e-95db-413a-ac71-f0574969753a-logs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.664815 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-scripts\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.665724 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-combined-ca-bundle\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.673897 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-internal-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.677853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk9r2\" (UniqueName: \"kubernetes.io/projected/4122899e-95db-413a-ac71-f0574969753a-kube-api-access-pk9r2\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.684735 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"]
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.685182 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.685239 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.687691 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-public-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.688760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-config-data\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.707767 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"]
Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.794786 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.255899 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" event={"ID":"4c6f056a-614c-4e3d-9bfe-de451b1d951d","Type":"ContainerStarted","Data":"12cac5303820f9f4b9790cf3756c563cd44a6389204cd476bba276cfd10f485f"}
Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.256257 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" event={"ID":"4c6f056a-614c-4e3d-9bfe-de451b1d951d","Type":"ContainerStarted","Data":"bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7"}
Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.321627 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" podStartSLOduration=2.321599534 podStartE2EDuration="2.321599534s" podCreationTimestamp="2026-02-16 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:02.293638597 +0000 UTC m=+1296.478615673" watchObservedRunningTime="2026-02-16 15:15:02.321599534 +0000 UTC m=+1296.506576630"
Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.475897 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" path="/var/lib/kubelet/pods/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d/volumes"
Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.477032 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b078dc5a-bbed-4006-9d76-370271a27353" path="/var/lib/kubelet/pods/b078dc5a-bbed-4006-9d76-370271a27353/volumes"
Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.809058 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6599894f76-dcwz8"]
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.298323 4705 generic.go:334] "Generic (PLEG): container finished" podID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerID="bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e" exitCode=0
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.300583 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerDied","Data":"bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e"}
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.304460 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6599894f76-dcwz8" event={"ID":"4122899e-95db-413a-ac71-f0574969753a","Type":"ContainerStarted","Data":"10b69de5c0b53c7b82189dc1ee98e780b478862bde93d7141a4094e042544984"}
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.304963 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6599894f76-dcwz8" event={"ID":"4122899e-95db-413a-ac71-f0574969753a","Type":"ContainerStarted","Data":"d00af7746386b7f352c8fff117ea38852e34da412e852defcda4a225f579e064"}
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.331449 4705 generic.go:334] "Generic (PLEG): container finished" podID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" containerID="12cac5303820f9f4b9790cf3756c563cd44a6389204cd476bba276cfd10f485f" exitCode=0
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.331504 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" event={"ID":"4c6f056a-614c-4e3d-9bfe-de451b1d951d","Type":"ContainerDied","Data":"12cac5303820f9f4b9790cf3756c563cd44a6389204cd476bba276cfd10f485f"}
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.415058 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.552814 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") "
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553436 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") "
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553553 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") "
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553671 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") "
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553768 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") "
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553900 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpwc7\" (UniqueName: \"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") "
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.558486 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.564391 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts" (OuterVolumeSpecName: "scripts") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.564973 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.571740 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7" (OuterVolumeSpecName: "kube-api-access-jpwc7") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "kube-api-access-jpwc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.635275 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657194 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpwc7\" (UniqueName: \"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657677 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657770 4705 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657853 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657961 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.737262 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data" (OuterVolumeSpecName: "config-data") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.763519 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.325625 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.202:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.378826 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerDied","Data":"e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1"}
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.378916 4705 scope.go:117] "RemoveContainer" containerID="8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.379144 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.406654 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6599894f76-dcwz8" event={"ID":"4122899e-95db-413a-ac71-f0574969753a","Type":"ContainerStarted","Data":"e772edf09fbb657c700bb40b6eb65545b240b2838a4027119ade34b9d4d3fc40"}
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.406835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.406936 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6599894f76-dcwz8"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.437292 4705 scope.go:117] "RemoveContainer" containerID="bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.513916 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.559861 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.607663 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.608165 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6599894f76-dcwz8" podStartSLOduration=3.608138488 podStartE2EDuration="3.608138488s" podCreationTimestamp="2026-02-16 15:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:04.49301322 +0000 UTC m=+1298.677990296" watchObservedRunningTime="2026-02-16 15:15:04.608138488 +0000 UTC m=+1298.793115564"
Feb 16 15:15:04 crc kubenswrapper[4705]: E0216 15:15:04.609032 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="probe"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.609073 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="probe"
Feb 16 15:15:04 crc kubenswrapper[4705]: E0216 15:15:04.609128 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="cinder-scheduler"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.609153 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="cinder-scheduler"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.611038 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="cinder-scheduler"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.611096 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="probe"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.614176 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.617879 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.716454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.716608 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmvnq\" (UniqueName: \"kubernetes.io/projected/c85708f6-f2cb-4248-94e9-7c7763e88275-kube-api-access-cmvnq\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.716919 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c85708f6-f2cb-4248-94e9-7c7763e88275-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.717154 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.717266 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.717490 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-scripts\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.736465 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821135 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821276 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-scripts\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821320 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821426 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmvnq\" (UniqueName: \"kubernetes.io/projected/c85708f6-f2cb-4248-94e9-7c7763e88275-kube-api-access-cmvnq\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821521 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c85708f6-f2cb-4248-94e9-7c7763e88275-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821638 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.829068 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c85708f6-f2cb-4248-94e9-7c7763e88275-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.844840 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.847112 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.850091 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.855255 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmvnq\" (UniqueName: \"kubernetes.io/projected/c85708f6-f2cb-4248-94e9-7c7763e88275-kube-api-access-cmvnq\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.860087 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-scripts\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.949704 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.098663 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.234008 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") pod \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") "
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.234333 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbvb2\" (UniqueName: \"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") pod \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") "
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.234515 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") pod \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") "
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.234986 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume" (OuterVolumeSpecName: "config-volume") pod "4c6f056a-614c-4e3d-9bfe-de451b1d951d" (UID: "4c6f056a-614c-4e3d-9bfe-de451b1d951d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.235548 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.243536 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2" (OuterVolumeSpecName: "kube-api-access-cbvb2") pod "4c6f056a-614c-4e3d-9bfe-de451b1d951d" (UID: "4c6f056a-614c-4e3d-9bfe-de451b1d951d"). InnerVolumeSpecName "kube-api-access-cbvb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.246494 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4c6f056a-614c-4e3d-9bfe-de451b1d951d" (UID: "4c6f056a-614c-4e3d-9bfe-de451b1d951d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.340354 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbvb2\" (UniqueName: \"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.340423 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.420851 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.420852 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" event={"ID":"4c6f056a-614c-4e3d-9bfe-de451b1d951d","Type":"ContainerDied","Data":"bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7"}
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.421024 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7"
Feb 16 15:15:05 crc kubenswrapper[4705]: E0216 15:15:05.522746 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc85708f6_f2cb_4248_94e9_7c7763e88275.slice/crio-db3cd008d3efd4fa524bd570281b3d6c1ff70d241d540423b4cab74482c76e95\": RecentStats: unable to find data in memory cache]"
Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.562940 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 15:15:06 crc kubenswrapper[4705]: I0216 15:15:06.446046 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" path="/var/lib/kubelet/pods/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051/volumes"
Feb 16 15:15:06 crc kubenswrapper[4705]: I0216 15:15:06.517558 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c85708f6-f2cb-4248-94e9-7c7763e88275","Type":"ContainerStarted","Data":"16fc55e77902de5edf15730327b855ec3327bf7c048124bc8bfb673a6b5a034a"}
Feb 16 15:15:06 crc kubenswrapper[4705]: I0216 15:15:06.517673 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c85708f6-f2cb-4248-94e9-7c7763e88275","Type":"ContainerStarted","Data":"db3cd008d3efd4fa524bd570281b3d6c1ff70d241d540423b4cab74482c76e95"}
Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.340667 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.429937 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-675dd58676-vnqw2"
Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.535324 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c85708f6-f2cb-4248-94e9-7c7763e88275","Type":"ContainerStarted","Data":"44abc1a158aec0b2637d9395912561ba28ea8b4333dc68c92cb6e190ad00ba6d"}
Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.577810 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.577778232 podStartE2EDuration="3.577778232s" podCreationTimestamp="2026-02-16 15:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:07.565119826 +0000 UTC m=+1301.750096902" watchObservedRunningTime="2026-02-16 15:15:07.577778232 +0000 UTC m=+1301.762755308"
Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.624503 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-675dd58676-vnqw2"
Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.719010 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"]
Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.719432 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" containerID="cri-o://40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" gracePeriod=30
Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.719998 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" containerID="cri-o://b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" gracePeriod=30
Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.916857 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6cd49d8b6b-6gdmx"
Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.204671 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 16 15:15:08 crc kubenswrapper[4705]: E0216 15:15:08.205889 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" containerName="collect-profiles"
Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.205910 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" containerName="collect-profiles"
Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.206276 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" containerName="collect-profiles"
Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.207347 4705 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.211516 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.211596 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.211998 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-j8dj6" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.220827 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.377192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.377263 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.377671 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwnzw\" (UniqueName: \"kubernetes.io/projected/4881941b-eb71-45be-aa51-0e8431b29e89-kube-api-access-bwnzw\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.378132 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config-secret\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.488949 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwnzw\" (UniqueName: \"kubernetes.io/projected/4881941b-eb71-45be-aa51-0e8431b29e89-kube-api-access-bwnzw\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.489137 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config-secret\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.489300 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.489328 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.490690 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.506349 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.506488 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config-secret\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.510518 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwnzw\" (UniqueName: \"kubernetes.io/projected/4881941b-eb71-45be-aa51-0e8431b29e89-kube-api-access-bwnzw\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.526876 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.573247 4705 generic.go:334] "Generic (PLEG): container finished" podID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerID="40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" exitCode=143 Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.574621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerDied","Data":"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6"} Feb 16 15:15:09 crc kubenswrapper[4705]: I0216 15:15:09.170971 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 15:15:09 crc kubenswrapper[4705]: I0216 15:15:09.586343 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4881941b-eb71-45be-aa51-0e8431b29e89","Type":"ContainerStarted","Data":"57de06ba06664890884a054c4865cc4af2645844c7ee8d8f5de3a66e901861ed"} Feb 16 15:15:09 crc kubenswrapper[4705]: I0216 15:15:09.951006 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 15:15:10 crc kubenswrapper[4705]: I0216 15:15:10.917200 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.199:9311/healthcheck\": read tcp 10.217.0.2:40492->10.217.0.199:9311: read: connection reset by peer" Feb 16 15:15:10 crc kubenswrapper[4705]: I0216 15:15:10.917310 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.199:9311/healthcheck\": read tcp 
10.217.0.2:40488->10.217.0.199:9311: read: connection reset by peer" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.507310 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.514389 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.514452 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.514482 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.516031 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs" (OuterVolumeSpecName: "logs") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.524208 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn" (OuterVolumeSpecName: "kube-api-access-gbdxn") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "kube-api-access-gbdxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.616638 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.616721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.617132 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.617153 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.629441 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom" 
(OuterVolumeSpecName: "config-data-custom") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.630154 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data" (OuterVolumeSpecName: "config-data") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642330 4705 generic.go:334] "Generic (PLEG): container finished" podID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerID="b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" exitCode=0 Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642418 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerDied","Data":"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4"} Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642461 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerDied","Data":"494073a82ffb15d51ca9ccf70ddd818083ecfa9ff2e728289031a38cb377d7c0"} Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642488 4705 scope.go:117] "RemoveContainer" containerID="b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642708 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.658231 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.685047 4705 scope.go:117] "RemoveContainer" containerID="40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.709023 4705 scope.go:117] "RemoveContainer" containerID="b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" Feb 16 15:15:11 crc kubenswrapper[4705]: E0216 15:15:11.710173 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4\": container with ID starting with b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4 not found: ID does not exist" containerID="b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.710222 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4"} err="failed to get container status \"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4\": rpc error: code = NotFound desc = could not find container \"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4\": container with ID starting with b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4 not found: ID does not exist" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.710255 
4705 scope.go:117] "RemoveContainer" containerID="40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" Feb 16 15:15:11 crc kubenswrapper[4705]: E0216 15:15:11.711066 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6\": container with ID starting with 40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6 not found: ID does not exist" containerID="40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.711084 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6"} err="failed to get container status \"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6\": rpc error: code = NotFound desc = could not find container \"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6\": container with ID starting with 40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6 not found: ID does not exist" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.721264 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.721329 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.721340 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:12 crc 
kubenswrapper[4705]: I0216 15:15:12.058066 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:15:12 crc kubenswrapper[4705]: I0216 15:15:12.084197 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:15:12 crc kubenswrapper[4705]: I0216 15:15:12.445647 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" path="/var/lib/kubelet/pods/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6/volumes" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.799111 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85b76884b7-g4c57"] Feb 16 15:15:13 crc kubenswrapper[4705]: E0216 15:15:13.799969 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.799987 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" Feb 16 15:15:13 crc kubenswrapper[4705]: E0216 15:15:13.800008 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.800015 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.800287 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.801514 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.803520 4705 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.805592 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.811240 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.811246 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.819538 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b76884b7-g4c57"] Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891472 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-combined-ca-bundle\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-config-data\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891586 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-internal-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 
15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891625 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-public-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891689 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-log-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns2bg\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-kube-api-access-ns2bg\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891778 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-etc-swift\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891810 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-run-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " 
pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.993850 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-config-data\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.993902 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-internal-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.993929 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-public-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.993999 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-log-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.994128 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns2bg\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-kube-api-access-ns2bg\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 
15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.994201 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-etc-swift\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.994239 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-run-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.994353 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-combined-ca-bundle\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.995906 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-log-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.997097 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-run-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.003168 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-combined-ca-bundle\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.007009 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-etc-swift\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.010067 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-public-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.010708 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-config-data\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.020423 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns2bg\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-kube-api-access-ns2bg\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.027863 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-internal-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.167545 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.815518 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b76884b7-g4c57"]
Feb 16 15:15:15 crc kubenswrapper[4705]: I0216 15:15:15.253711 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 16 15:15:15 crc kubenswrapper[4705]: I0216 15:15:15.755742 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b76884b7-g4c57" event={"ID":"811fab8b-dbb5-4985-b67f-d3671ea6ff9b","Type":"ContainerStarted","Data":"b2e44d42e6591bd938539ffa069132e365bc1444be32785ab2c8624355e7c642"}
Feb 16 15:15:15 crc kubenswrapper[4705]: I0216 15:15:15.756232 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b76884b7-g4c57" event={"ID":"811fab8b-dbb5-4985-b67f-d3671ea6ff9b","Type":"ContainerStarted","Data":"e268c748fcdb24ba71ce2f7ff09d912ad538b7890dc0d84a77ac934ded34dee4"}
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.734639 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mqnvt"]
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.736936 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.760110 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mqnvt"]
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.855479 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"]
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.857507 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.877355 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"]
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.879331 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2d9b-account-create-update-wlxl6"
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.889193 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.891578 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.891852 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.892102 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"]
Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.959228 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.002108 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.002331 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.002454 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.007868 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d9n\" (UniqueName: \"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.007997 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.008650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.008930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.042885 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.088713 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.099479 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.101586 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6nsdt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.115767 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5d9n\" (UniqueName: \"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.115924 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.116286 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.116526 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.118724 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.119534 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.131820 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.140388 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5d9n\" (UniqueName: \"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.147397 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.174494 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.176606 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.187942 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.191924 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.198780 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.227214 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2d9b-account-create-update-wlxl6"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.228571 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.229772 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.239144 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.239415 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.324922 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.326904 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.331933 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.349064 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.349963 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.350501 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.350626 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.352122 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.352540 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.353223 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.373648 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.379220 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.394987 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-7v2x2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.397016 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.397777 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.404428 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.442213 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.443624 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454261 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454318 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454361 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454413 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.558876 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.559478 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.559646 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.559983 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.560584 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.560983 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.561675 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.573689 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.574973 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.578880 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.582144 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.592169 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.600171 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.612699 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.616357 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.627574 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.647676 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.649667 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.666788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkr56\" (UniqueName: \"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667063 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667125 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667175 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667251 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667275 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.670108 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.690549 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.693805 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-656d9cf494-c6m8t"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.699358 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.700530 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.707453 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.745998 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6nsdt"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.769031 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"]
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777127 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777312 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777412 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777465 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777652 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777684 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777748 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777782 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tvdc\" (UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777821 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.778129 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.779091 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.779875 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.780281 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkr56\" (UniqueName:
\"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.780467 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.780987 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.781319 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.781327 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.805432 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkr56\" (UniqueName: 
\"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.811685 4705 generic.go:334] "Generic (PLEG): container finished" podID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerID="ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de" exitCode=137 Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.811730 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerDied","Data":"ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de"} Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.858224 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883028 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883155 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") pod 
\"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883700 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883735 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883755 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883788 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tvdc\" (UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883847 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") pod \"heat-api-656d9cf494-c6m8t\" (UID: 
\"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.887483 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.888498 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.891702 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.893056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.894968 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 
15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.895649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.902207 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.904128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tvdc\" (UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.930717 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.989854 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:18 crc kubenswrapper[4705]: I0216 15:15:18.079896 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:22 crc kubenswrapper[4705]: I0216 15:15:22.232167 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.185:3000/\": dial tcp 10.217.0.185:3000: connect: connection refused" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.470468 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7b7bf99b56-hm6dc"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.472950 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.510813 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.512608 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.547421 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b7bf99b56-hm6dc"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.594396 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data-custom\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.594527 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9w9l\" (UniqueName: \"kubernetes.io/projected/ada71f46-f923-4974-9776-ed92f20c79b1-kube-api-access-r9w9l\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.594600 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-combined-ca-bundle\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.594669 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.625348 4705 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.627411 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.661637 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.692415 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702643 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702735 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-combined-ca-bundle\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702809 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfs2\" (UniqueName: 
\"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data-custom\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.703045 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.703164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9w9l\" (UniqueName: \"kubernetes.io/projected/ada71f46-f923-4974-9776-ed92f20c79b1-kube-api-access-r9w9l\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.721027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.722132 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data-custom\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.740266 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-combined-ca-bundle\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.756955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9w9l\" (UniqueName: \"kubernetes.io/projected/ada71f46-f923-4974-9776-ed92f20c79b1-kube-api-access-r9w9l\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807455 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807573 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807628 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807654 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807700 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807723 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfs2\" (UniqueName: 
\"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807817 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.821247 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.823905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.824793 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.830242 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.872149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfs2\" (UniqueName: \"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.910616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.911158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.911242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " 
pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.911261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.915065 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.939789 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.940551 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.942228 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 
15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.013346 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7cfb944475-hpwlf"
Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.165387 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74b44f99fd-mnr7j"
Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.296939 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66f94f69bf-82g78"
Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.417684 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75d799457-fvqj6"]
Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.418049 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75d799457-fvqj6" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-api" containerID="cri-o://b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe" gracePeriod=30
Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.418495 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75d799457-fvqj6" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-httpd" containerID="cri-o://338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37" gracePeriod=30
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.010870 4705 generic.go:334] "Generic (PLEG): container finished" podID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerID="338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37" exitCode=0
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.010974 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerDied","Data":"338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37"}
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.017235 4705 generic.go:334] "Generic (PLEG): container finished" podID="69bc6a88-b325-43bd-af4c-55283723a765" containerID="baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac" exitCode=137
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.017303 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerDied","Data":"baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac"}
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.222533 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"]
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.267538 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7986669c9b-q8ghv"]
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.269618 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.273222 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.273413 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.292630 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7986669c9b-q8ghv"]
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.319575 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"]
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357247 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72hq\" (UniqueName: \"kubernetes.io/projected/08b1576e-92c8-407b-b821-e0cbfe1be11a-kube-api-access-p72hq\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357509 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data-custom\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357601 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357771 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-internal-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-public-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357959 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-combined-ca-bundle\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.359045 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65b6d6849b-79456"]
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.361166 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.364938 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.368558 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.382732 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65b6d6849b-79456"]
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460476 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data-custom\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460555 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p72hq\" (UniqueName: \"kubernetes.io/projected/08b1576e-92c8-407b-b821-e0cbfe1be11a-kube-api-access-p72hq\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460594 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85dwt\" (UniqueName: \"kubernetes.io/projected/94fb430a-807d-4e37-bc5a-9b4c75454427-kube-api-access-85dwt\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460669 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data-custom\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460713 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460750 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-public-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460789 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-combined-ca-bundle\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460819 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-internal-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460843 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-public-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460907 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-internal-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460937 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-combined-ca-bundle\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460978 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.466049 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.466161 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.471270 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-combined-ca-bundle\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.475536 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data-custom\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.477307 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-internal-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.477646 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-public-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.481602 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p72hq\" (UniqueName: \"kubernetes.io/projected/08b1576e-92c8-407b-b821-e0cbfe1be11a-kube-api-access-p72hq\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.496733 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.563806 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-public-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.563885 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-combined-ca-bundle\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.563995 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-internal-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.564074 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.564142 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data-custom\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.564193 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85dwt\" (UniqueName: \"kubernetes.io/projected/94fb430a-807d-4e37-bc5a-9b4c75454427-kube-api-access-85dwt\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.568071 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.568542 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.576788 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.578077 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data-custom\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.585011 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-internal-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.585113 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-public-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.586405 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-combined-ca-bundle\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.587758 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85dwt\" (UniqueName: \"kubernetes.io/projected/94fb430a-807d-4e37-bc5a-9b4c75454427-kube-api-access-85dwt\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.644267 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.694389 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:26 crc kubenswrapper[4705]: E0216 15:15:26.696752 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified"
Feb 16 15:15:26 crc kubenswrapper[4705]: E0216 15:15:26.696949 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n97h86h677h666h84h66fh648h59ch64fh7ch56dh5d7h5d5h699h75h5bfh644h6bh64dh564h5b6h55ch64h7dh676h66bh5f4h549h9fh5d4h5d9h596q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwnzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(4881941b-eb71-45be-aa51-0e8431b29e89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 16 15:15:26 crc kubenswrapper[4705]: E0216 15:15:26.698166 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="4881941b-eb71-45be-aa51-0e8431b29e89"
Feb 16 15:15:27 crc kubenswrapper[4705]: E0216 15:15:27.170325 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="4881941b-eb71-45be-aa51-0e8431b29e89"
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.476570 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624511 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") "
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624654 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") "
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624748 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") "
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624801 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") "
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624933 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") "
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.625069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") "
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.625137 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") "
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.626650 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.634593 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.654772 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk" (OuterVolumeSpecName: "kube-api-access-g4tkk") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "kube-api-access-g4tkk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.655703 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts" (OuterVolumeSpecName: "scripts") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.766858 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.767226 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.767237 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.767246 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.994578 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"]
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.000258 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.051637 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.073795 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data" (OuterVolumeSpecName: "config-data") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.078138 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.078164 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.078175 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.153599 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerDied","Data":"50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7"}
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.153645 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.184660 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerDied","Data":"1f91f91f4ee1690f46dee7379d3b5f6f9664f4c57d16ad81e7ef1f99a61e9417"}
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.184743 4705 scope.go:117] "RemoveContainer" containerID="ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.185019 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.242275 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.460299 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.491608 4705 scope.go:117] "RemoveContainer" containerID="7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.530896 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.554010 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568103 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568671 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="ceilometer-notification-agent"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568692 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="ceilometer-notification-agent"
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568706 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api-log"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568713 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api-log"
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568729 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568735 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd"
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568744 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568750 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api"
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568763 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="sg-core"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568769 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="sg-core"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569073 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api-log"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569089 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="sg-core"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569106 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="ceilometer-notification-agent"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569120 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569135 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.571475 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.576152 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.576541 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600570 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600627 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600676 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600854 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600932 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.601034 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2dbw\" (UniqueName: \"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.601061 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.601103 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "etc-machine-id".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.601684 4705 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.605022 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs" (OuterVolumeSpecName: "logs") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.608165 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.640114 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw" (OuterVolumeSpecName: "kube-api-access-s2dbw") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "kube-api-access-s2dbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.642672 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.642716 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts" (OuterVolumeSpecName: "scripts") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.696204 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.708744 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.708803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.708839 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " 
pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.708908 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709002 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709035 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709073 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709182 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709196 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2dbw\" (UniqueName: 
\"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709206 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709215 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709222 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.735609 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data" (OuterVolumeSpecName: "config-data") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.813068 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.813573 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.813705 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.813880 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.814053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.814225 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.814353 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.814538 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.815084 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.815171 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.826450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.833127 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.857974 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.859141 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.865720 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.000327 4705 scope.go:117] "RemoveContainer" containerID="9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.038872 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.223415 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" event={"ID":"38af35f6-7590-41c4-9442-ec89fe02106f","Type":"ContainerStarted","Data":"624e47298bbfcaa05f1d1cb521cf8da9b7629abb98c32b57ca82484813d5a2ce"} Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.223877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" event={"ID":"38af35f6-7590-41c4-9442-ec89fe02106f","Type":"ContainerStarted","Data":"0e7a2938061cf0203a20d61b57343936ee25d4ec52176b148cecec41e59f82c7"} Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.245340 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" podStartSLOduration=12.245318184 podStartE2EDuration="12.245318184s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:29.239182481 +0000 UTC m=+1323.424159547" watchObservedRunningTime="2026-02-16 15:15:29.245318184 +0000 UTC m=+1323.430295250" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.250607 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.255465 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b76884b7-g4c57" event={"ID":"811fab8b-dbb5-4985-b67f-d3671ea6ff9b","Type":"ContainerStarted","Data":"2de6afc52b4fb109681f7676f68a992bbdf998d962c01b0b50469249fc69a1c3"} Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.256193 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.256416 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.282496 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-85b76884b7-g4c57" podUID="811fab8b-dbb5-4985-b67f-d3671ea6ff9b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.307599 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85b76884b7-g4c57" podStartSLOduration=16.307577685 podStartE2EDuration="16.307577685s" podCreationTimestamp="2026-02-16 15:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:29.287550512 +0000 UTC m=+1323.472527598" watchObservedRunningTime="2026-02-16 15:15:29.307577685 +0000 UTC m=+1323.492554761" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.365079 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.388925 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.413456 4705 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cinder-api-0"] Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.436857 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.445303 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.445605 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.484191 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.578044 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.585773 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d09b351a-8da4-4f00-8847-f3461478179f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593457 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wpnm\" (UniqueName: \"kubernetes.io/projected/d09b351a-8da4-4f00-8847-f3461478179f-kube-api-access-2wpnm\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593554 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " 
pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593649 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data-custom\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593843 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593940 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-scripts\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593974 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.594002 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.594213 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d09b351a-8da4-4f00-8847-f3461478179f-logs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698005 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d09b351a-8da4-4f00-8847-f3461478179f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wpnm\" (UniqueName: \"kubernetes.io/projected/d09b351a-8da4-4f00-8847-f3461478179f-kube-api-access-2wpnm\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698485 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698518 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data-custom\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698571 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-combined-ca-bundle\") pod \"cinder-api-0\" 
(UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-scripts\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698630 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698648 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698719 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d09b351a-8da4-4f00-8847-f3461478179f-logs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.699239 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d09b351a-8da4-4f00-8847-f3461478179f-logs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.701455 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d09b351a-8da4-4f00-8847-f3461478179f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.710288 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.715303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.727753 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data-custom\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.728414 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-scripts\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.731681 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wpnm\" (UniqueName: \"kubernetes.io/projected/d09b351a-8da4-4f00-8847-f3461478179f-kube-api-access-2wpnm\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.747210 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.771283 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.817261 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.901162 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"] Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.981681 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"] Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.983060 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.009328 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.035285 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"] Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.061099 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"] Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.273067 4705 generic.go:334] "Generic (PLEG): container finished" podID="38af35f6-7590-41c4-9442-ec89fe02106f" 
containerID="624e47298bbfcaa05f1d1cb521cf8da9b7629abb98c32b57ca82484813d5a2ce" exitCode=0 Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.273205 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" event={"ID":"38af35f6-7590-41c4-9442-ec89fe02106f","Type":"ContainerDied","Data":"624e47298bbfcaa05f1d1cb521cf8da9b7629abb98c32b57ca82484813d5a2ce"} Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.275153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nsdt" event={"ID":"3c6fc941-1576-4817-859a-6644349bc8cd","Type":"ContainerStarted","Data":"0402e46f35c154212ec7419bd6c3fec74c389a550cdcd2bb465f45223c5e91dd"} Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.280159 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" event={"ID":"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca","Type":"ContainerStarted","Data":"07359fe7b9cf7c5f1d493c117441af14d97a55cf8e7e896736d82451018cdca8"} Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.283889 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" event={"ID":"6a0302cb-f7dd-46d4-8df0-2ab25bddec10","Type":"ContainerStarted","Data":"0643e28f6cc16efbe3ba6a7f835bd85812e2fcc0857d0dda9b56690a6a620d51"} Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.288760 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" event={"ID":"c18d067a-2ef1-4b11-936f-aef7f7910a80","Type":"ContainerStarted","Data":"c83e041ccbe28cc109471010621433baa4da8a5725021b1b9a2d4ab402d027a1"} Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.296526 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-85b76884b7-g4c57" podUID="811fab8b-dbb5-4985-b67f-d3671ea6ff9b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with 
statuscode: 503" Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.483518 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69bc6a88-b325-43bd-af4c-55283723a765" path="/var/lib/kubelet/pods/69bc6a88-b325-43bd-af4c-55283723a765/volumes" Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.485164 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" path="/var/lib/kubelet/pods/b1b8bc91-daf7-4fa0-aad2-7d14527c2298/volumes" Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.733154 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"] Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.788718 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:30 crc kubenswrapper[4705]: W0216 15:15:30.806588 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod951d407e_26bd_442f_8519_61650a9a3e70.slice/crio-5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34 WatchSource:0}: Error finding container 5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34: Status 404 returned error can't find the container with id 5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34 Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.919330 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b7bf99b56-hm6dc"] Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.111234 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.154153 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7986669c9b-q8ghv"] Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.173174 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-api-656d9cf494-c6m8t"] Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.189283 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"] Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.219428 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"] Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.222006 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mqnvt"] Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.240385 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65b6d6849b-79456"] Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.258191 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:31 crc kubenswrapper[4705]: W0216 15:15:31.287026 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6760289c_b8a9_45ed_bbab_3d5d5ca1db17.slice/crio-3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54 WatchSource:0}: Error finding container 3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54: Status 404 returned error can't find the container with id 3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54 Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.380276 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.397694 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" event={"ID":"6a0302cb-f7dd-46d4-8df0-2ab25bddec10","Type":"ContainerStarted","Data":"b6ff178ee59d258cd0a815ddbd0d83ca22d1d8fd5e5badc95b33346ac9ac1dd2"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.433799 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-65b6d6849b-79456" event={"ID":"94fb430a-807d-4e37-bc5a-9b4c75454427","Type":"ContainerStarted","Data":"2cdb5dc66bc2ee90d7d5c23d3d6d7ca813c990a20ae4c3e03b2ace84b86330ed"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.438221 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" podStartSLOduration=14.438192233 podStartE2EDuration="14.438192233s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:31.421538105 +0000 UTC m=+1325.606515191" watchObservedRunningTime="2026-02-16 15:15:31.438192233 +0000 UTC m=+1325.623169309" Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.442843 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7bf99b56-hm6dc" event={"ID":"ada71f46-f923-4974-9776-ed92f20c79b1","Type":"ContainerStarted","Data":"67ef71715a23c912ae4fd99be1d097d8e372f553b92cd63cf628172082ac6f24"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.445269 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerStarted","Data":"6acd1944658746507adf3b4af992bae06e651f8bf8b1f5ec60b84795bec2d1f1"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.455573 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqnvt" event={"ID":"7b2a0a9c-1379-457e-a5e2-537304cfdcff","Type":"ContainerStarted","Data":"a5ece6223ece92829877ec6c63ae433c7500a1bc896b69d0f284e3fd6afc7cb7"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.457853 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x6wr8" 
event={"ID":"8b468686-b5ab-423d-a720-a2c77aed457f","Type":"ContainerStarted","Data":"446fae71056dbf1b7f079bba077645c2c99e95a68610a5722ab512a3cf936661"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.460418 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j" event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerStarted","Data":"5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.475440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" event={"ID":"c18d067a-2ef1-4b11-936f-aef7f7910a80","Type":"ContainerStarted","Data":"fa03ffbdc99df54493084bdd802dfc7cc972f18375229d2457f61f8fa6ea18b6"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.484193 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7986669c9b-q8ghv" event={"ID":"08b1576e-92c8-407b-b821-e0cbfe1be11a","Type":"ContainerStarted","Data":"e5055a703a591476917dfc6fbbf1aef43e5b8b8aba57c1130df721992b50defe"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.486263 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7cc9557b-77tq2" event={"ID":"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa","Type":"ContainerStarted","Data":"00f8e5fe522e813566a78b6896b44d2c17e83898b0bbb39385052b0a457034e8"} Feb 16 15:15:31 crc kubenswrapper[4705]: W0216 15:15:31.487917 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd09b351a_8da4_4f00_8847_f3461478179f.slice/crio-7619464f942a7231be64ec173422c90b46158713732b1adce994e1174790ed2e WatchSource:0}: Error finding container 7619464f942a7231be64ec173422c90b46158713732b1adce994e1174790ed2e: Status 404 returned error can't find the container with id 7619464f942a7231be64ec173422c90b46158713732b1adce994e1174790ed2e Feb 16 15:15:31 crc 
kubenswrapper[4705]: I0216 15:15:31.488528 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-656d9cf494-c6m8t" event={"ID":"3a49bd2f-26b0-4969-86db-cd980251a202","Type":"ContainerStarted","Data":"75b8ea33afa2dc74710b8197cd60788f65dd6c58802ff69550dde775ef900e97"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.525950 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nsdt" event={"ID":"3c6fc941-1576-4817-859a-6644349bc8cd","Type":"ContainerStarted","Data":"e8a382be23bea794eda4951ad147e8a541ec0cf46557fafa0b29ca1f74d84546"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.555051 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerStarted","Data":"7eed159df357b814d8fe77b30f4e632478a311f8b770660151ac4fae245b6428"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.687586 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.687658 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.597557 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqnvt" event={"ID":"7b2a0a9c-1379-457e-a5e2-537304cfdcff","Type":"ContainerStarted","Data":"5298d8d4bbe490dcf8fd4d8c8fd18c95543c555b9240d37267fbfc9891ee3207"} Feb 16 15:15:32 crc 
kubenswrapper[4705]: I0216 15:15:32.609010 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x6wr8" event={"ID":"8b468686-b5ab-423d-a720-a2c77aed457f","Type":"ContainerStarted","Data":"8727f6608d01bea1d2d092cb593cbdfdbcf01d7388fded5a43fcf9ca1545112c"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.612238 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.674416 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-mqnvt" podStartSLOduration=16.674387679 podStartE2EDuration="16.674387679s" podCreationTimestamp="2026-02-16 15:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:32.63780355 +0000 UTC m=+1326.822780626" watchObservedRunningTime="2026-02-16 15:15:32.674387679 +0000 UTC m=+1326.859364845" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.682554 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7bf99b56-hm6dc" event={"ID":"ada71f46-f923-4974-9776-ed92f20c79b1","Type":"ContainerStarted","Data":"96d51637e1959c093b9c48d9015dbec840c3b58132f3e7055cb5c1b21ca999c1"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.685396 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.689697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" event={"ID":"38af35f6-7590-41c4-9442-ec89fe02106f","Type":"ContainerDied","Data":"0e7a2938061cf0203a20d61b57343936ee25d4ec52176b148cecec41e59f82c7"} Feb 16 15:15:32 crc 
kubenswrapper[4705]: I0216 15:15:32.689740 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e7a2938061cf0203a20d61b57343936ee25d4ec52176b148cecec41e59f82c7" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.690620 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-x6wr8" podStartSLOduration=16.690595885 podStartE2EDuration="16.690595885s" podCreationTimestamp="2026-02-16 15:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:32.656967229 +0000 UTC m=+1326.841944305" watchObservedRunningTime="2026-02-16 15:15:32.690595885 +0000 UTC m=+1326.875572961" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.707752 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d09b351a-8da4-4f00-8847-f3461478179f","Type":"ContainerStarted","Data":"7619464f942a7231be64ec173422c90b46158713732b1adce994e1174790ed2e"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.736467 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c6fc941-1576-4817-859a-6644349bc8cd" containerID="e8a382be23bea794eda4951ad147e8a541ec0cf46557fafa0b29ca1f74d84546" exitCode=0 Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.736579 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nsdt" event={"ID":"3c6fc941-1576-4817-859a-6644349bc8cd","Type":"ContainerDied","Data":"e8a382be23bea794eda4951ad147e8a541ec0cf46557fafa0b29ca1f74d84546"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.762521 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7b7bf99b56-hm6dc" podStartSLOduration=8.762467896 podStartE2EDuration="8.762467896s" podCreationTimestamp="2026-02-16 15:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:32.719521428 +0000 UTC m=+1326.904498504" watchObservedRunningTime="2026-02-16 15:15:32.762467896 +0000 UTC m=+1326.947444972" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.773683 4705 generic.go:334] "Generic (PLEG): container finished" podID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" containerID="b6ff178ee59d258cd0a815ddbd0d83ca22d1d8fd5e5badc95b33346ac9ac1dd2" exitCode=0 Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.774082 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" event={"ID":"6a0302cb-f7dd-46d4-8df0-2ab25bddec10","Type":"ContainerDied","Data":"b6ff178ee59d258cd0a815ddbd0d83ca22d1d8fd5e5badc95b33346ac9ac1dd2"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.809857 4705 generic.go:334] "Generic (PLEG): container finished" podID="c18d067a-2ef1-4b11-936f-aef7f7910a80" containerID="fa03ffbdc99df54493084bdd802dfc7cc972f18375229d2457f61f8fa6ea18b6" exitCode=0 Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.809916 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" event={"ID":"c18d067a-2ef1-4b11-936f-aef7f7910a80","Type":"ContainerDied","Data":"fa03ffbdc99df54493084bdd802dfc7cc972f18375229d2457f61f8fa6ea18b6"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.825130 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.974584 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") pod \"38af35f6-7590-41c4-9442-ec89fe02106f\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.975526 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") pod \"38af35f6-7590-41c4-9442-ec89fe02106f\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.976445 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38af35f6-7590-41c4-9442-ec89fe02106f" (UID: "38af35f6-7590-41c4-9442-ec89fe02106f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.983481 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.998285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv" (OuterVolumeSpecName: "kube-api-access-84rlv") pod "38af35f6-7590-41c4-9442-ec89fe02106f" (UID: "38af35f6-7590-41c4-9442-ec89fe02106f"). InnerVolumeSpecName "kube-api-access-84rlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.086731 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.550512 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.579002 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.615534 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") pod \"3c6fc941-1576-4817-859a-6644349bc8cd\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.615617 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") pod \"c18d067a-2ef1-4b11-936f-aef7f7910a80\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.615683 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") pod \"3c6fc941-1576-4817-859a-6644349bc8cd\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.615982 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") pod \"c18d067a-2ef1-4b11-936f-aef7f7910a80\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.618869 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c18d067a-2ef1-4b11-936f-aef7f7910a80" (UID: "c18d067a-2ef1-4b11-936f-aef7f7910a80"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.619192 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c6fc941-1576-4817-859a-6644349bc8cd" (UID: "3c6fc941-1576-4817-859a-6644349bc8cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.621279 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.621330 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.625926 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf" (OuterVolumeSpecName: "kube-api-access-k5swf") pod "3c6fc941-1576-4817-859a-6644349bc8cd" (UID: "3c6fc941-1576-4817-859a-6644349bc8cd"). 
InnerVolumeSpecName "kube-api-access-k5swf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.625987 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q" (OuterVolumeSpecName: "kube-api-access-qzb7q") pod "c18d067a-2ef1-4b11-936f-aef7f7910a80" (UID: "c18d067a-2ef1-4b11-936f-aef7f7910a80"). InnerVolumeSpecName "kube-api-access-qzb7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.725511 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.726662 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.882260 4705 generic.go:334] "Generic (PLEG): container finished" podID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" containerID="5298d8d4bbe490dcf8fd4d8c8fd18c95543c555b9240d37267fbfc9891ee3207" exitCode=0 Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.882703 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqnvt" event={"ID":"7b2a0a9c-1379-457e-a5e2-537304cfdcff","Type":"ContainerDied","Data":"5298d8d4bbe490dcf8fd4d8c8fd18c95543c555b9240d37267fbfc9891ee3207"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.903711 4705 generic.go:334] "Generic (PLEG): container finished" podID="8b468686-b5ab-423d-a720-a2c77aed457f" containerID="8727f6608d01bea1d2d092cb593cbdfdbcf01d7388fded5a43fcf9ca1545112c" exitCode=0 Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 
15:15:33.903789 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x6wr8" event={"ID":"8b468686-b5ab-423d-a720-a2c77aed457f","Type":"ContainerDied","Data":"8727f6608d01bea1d2d092cb593cbdfdbcf01d7388fded5a43fcf9ca1545112c"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.909529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" event={"ID":"c18d067a-2ef1-4b11-936f-aef7f7910a80","Type":"ContainerDied","Data":"c83e041ccbe28cc109471010621433baa4da8a5725021b1b9a2d4ab402d027a1"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.909565 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c83e041ccbe28cc109471010621433baa4da8a5725021b1b9a2d4ab402d027a1" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.909636 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.918060 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7cc9557b-77tq2" event={"ID":"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa","Type":"ContainerStarted","Data":"5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.918498 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.931563 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.935448 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"d09b351a-8da4-4f00-8847-f3461478179f","Type":"ContainerStarted","Data":"02a5e2d5d7e31d67b1bb7a3cbeb3d323b8ed9573be1a3ec02ee106bca3ba399c"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.946766 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerID="af7fbc84522ccf5649bb0a370c37dac7dd268bfbb7ce51833545d0053cd05d20" exitCode=0 Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.946900 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerDied","Data":"af7fbc84522ccf5649bb0a370c37dac7dd268bfbb7ce51833545d0053cd05d20"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.959705 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.959945 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.969747 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nsdt" event={"ID":"3c6fc941-1576-4817-859a-6644349bc8cd","Type":"ContainerDied","Data":"0402e46f35c154212ec7419bd6c3fec74c389a550cdcd2bb465f45223c5e91dd"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.969794 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0402e46f35c154212ec7419bd6c3fec74c389a550cdcd2bb465f45223c5e91dd" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.967738 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7b7cc9557b-77tq2" podStartSLOduration=16.967716781 podStartE2EDuration="16.967716781s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:33.946972258 +0000 UTC m=+1328.131949324" watchObservedRunningTime="2026-02-16 15:15:33.967716781 +0000 UTC m=+1328.152693857" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.178867 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.541340 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.781263 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.783024 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.875458 4705 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/placement-565b84d684-sh8jq"]
Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.876209 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-565b84d684-sh8jq" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-api" containerID="cri-o://72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79" gracePeriod=30
Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.876489 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-565b84d684-sh8jq" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-log" containerID="cri-o://0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802" gracePeriod=30
Feb 16 15:15:35 crc kubenswrapper[4705]: I0216 15:15:35.043793 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d09b351a-8da4-4f00-8847-f3461478179f","Type":"ContainerStarted","Data":"d6362dbef86cbcd19bf87413815374f6225932e1d1a905780b0f3d66245836a1"}
Feb 16 15:15:35 crc kubenswrapper[4705]: I0216 15:15:35.043945 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 16 15:15:35 crc kubenswrapper[4705]: I0216 15:15:35.082393 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.082350787 podStartE2EDuration="6.082350787s" podCreationTimestamp="2026-02-16 15:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:35.069338081 +0000 UTC m=+1329.254315157" watchObservedRunningTime="2026-02-16 15:15:35.082350787 +0000 UTC m=+1329.267327863"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.048005 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.050656 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.052025 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.066340 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" event={"ID":"6a0302cb-f7dd-46d4-8df0-2ab25bddec10","Type":"ContainerDied","Data":"0643e28f6cc16efbe3ba6a7f835bd85812e2fcc0857d0dda9b56690a6a620d51"}
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.066426 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0643e28f6cc16efbe3ba6a7f835bd85812e2fcc0857d0dda9b56690a6a620d51"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.066512 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.074265 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqnvt" event={"ID":"7b2a0a9c-1379-457e-a5e2-537304cfdcff","Type":"ContainerDied","Data":"a5ece6223ece92829877ec6c63ae433c7500a1bc896b69d0f284e3fd6afc7cb7"}
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.074323 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ece6223ece92829877ec6c63ae433c7500a1bc896b69d0f284e3fd6afc7cb7"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.075854 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqnvt"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.106310 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x6wr8"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.107407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x6wr8" event={"ID":"8b468686-b5ab-423d-a720-a2c77aed457f","Type":"ContainerDied","Data":"446fae71056dbf1b7f079bba077645c2c99e95a68610a5722ab512a3cf936661"}
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.107515 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446fae71056dbf1b7f079bba077645c2c99e95a68610a5722ab512a3cf936661"
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.140512 4705 generic.go:334] "Generic (PLEG): container finished" podID="8486800f-2aec-490d-a174-e05a0fa27a62" containerID="0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802" exitCode=143
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.140655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerDied","Data":"0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802"}
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.148182 4705 generic.go:334] "Generic (PLEG): container finished" podID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerID="b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe" exitCode=0
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.148393 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerDied","Data":"b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe"}
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.169295 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") pod \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") "
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.169999 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") pod \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") "
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.170237 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") pod \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") "
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.170439 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") pod \"8b468686-b5ab-423d-a720-a2c77aed457f\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") "
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.176723 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") pod \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") "
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.176906 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5d9n\" (UniqueName: \"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") pod \"8b468686-b5ab-423d-a720-a2c77aed457f\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") "
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.171163 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a0302cb-f7dd-46d4-8df0-2ab25bddec10" (UID: "6a0302cb-f7dd-46d4-8df0-2ab25bddec10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.171197 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b2a0a9c-1379-457e-a5e2-537304cfdcff" (UID: "7b2a0a9c-1379-457e-a5e2-537304cfdcff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.172630 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b468686-b5ab-423d-a720-a2c77aed457f" (UID: "8b468686-b5ab-423d-a720-a2c77aed457f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.181596 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9" (OuterVolumeSpecName: "kube-api-access-72tq9") pod "6a0302cb-f7dd-46d4-8df0-2ab25bddec10" (UID: "6a0302cb-f7dd-46d4-8df0-2ab25bddec10"). InnerVolumeSpecName "kube-api-access-72tq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.185462 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.185491 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.185503 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.185542 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.198063 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n" (OuterVolumeSpecName: "kube-api-access-b5d9n") pod "8b468686-b5ab-423d-a720-a2c77aed457f" (UID: "8b468686-b5ab-423d-a720-a2c77aed457f"). InnerVolumeSpecName "kube-api-access-b5d9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.200883 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm" (OuterVolumeSpecName: "kube-api-access-r85gm") pod "7b2a0a9c-1379-457e-a5e2-537304cfdcff" (UID: "7b2a0a9c-1379-457e-a5e2-537304cfdcff"). InnerVolumeSpecName "kube-api-access-r85gm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.288330 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.291016 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5d9n\" (UniqueName: \"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.204255 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerDied","Data":"a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b"}
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.204967 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b"
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.263446 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75d799457-fvqj6"
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.452593 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") "
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.453091 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") "
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.456690 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") "
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.456931 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") "
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.457921 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") "
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.457975 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") "
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.458022 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") "
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.579549 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq" (OuterVolumeSpecName: "kube-api-access-hdqbq") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "kube-api-access-hdqbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.670201 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.795827 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config" (OuterVolumeSpecName: "config") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.874386 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.882995 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.976038 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.221241 4705 generic.go:334] "Generic (PLEG): container finished" podID="8486800f-2aec-490d-a174-e05a0fa27a62" containerID="72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79" exitCode=0
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.224822 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75d799457-fvqj6"
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.222408 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerDied","Data":"72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79"}
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.381651 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.389113 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.400606 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.406995 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.498104 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.498149 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.508545 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.607630 4705 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.697145 4705 kubelet_pods.go:2476] "Failed to reduce cpu time for pod pending volume cleanup" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" err="openat2 /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5639f9d_2d22_47cb_b481_10e88dc7f90f.slice/cgroup.controllers: no such file or directory"
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.697237 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.775420 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75d799457-fvqj6"]
Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.799970 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75d799457-fvqj6"]
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.276916 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7986669c9b-q8ghv" event={"ID":"08b1576e-92c8-407b-b821-e0cbfe1be11a","Type":"ContainerStarted","Data":"9d643e2db80bd365b8f950c7dece546e6ce638bc7851f64c39e67ef4e3b8f204"}
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.278493 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7986669c9b-q8ghv"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.287255 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b"}
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.297779 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerDied","Data":"abe1e154dc793291fe4a1e1361bdea85c411201d08d5b6df947af6208be90837"}
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.297843 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abe1e154dc793291fe4a1e1361bdea85c411201d08d5b6df947af6208be90837"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.308204 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7986669c9b-q8ghv" podStartSLOduration=7.27932802 podStartE2EDuration="13.308173429s" podCreationTimestamp="2026-02-16 15:15:26 +0000 UTC" firstStartedPulling="2026-02-16 15:15:31.125630893 +0000 UTC m=+1325.310607959" lastFinishedPulling="2026-02-16 15:15:37.154476292 +0000 UTC m=+1331.339453368" observedRunningTime="2026-02-16 15:15:39.302040207 +0000 UTC m=+1333.487017283" watchObservedRunningTime="2026-02-16 15:15:39.308173429 +0000 UTC m=+1333.493150505"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.317560 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" event={"ID":"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca","Type":"ContainerStarted","Data":"895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926"}
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.317788 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerName="heat-cfnapi" containerID="cri-o://895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926" gracePeriod=60
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.318122 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.329680 4705 generic.go:334] "Generic (PLEG): container finished" podID="951d407e-26bd-442f-8519-61650a9a3e70" containerID="825fae9ff1f73721a415051822f8800d35104abf442acc8f65b15cdad2567831" exitCode=1
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.329803 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j" event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerDied","Data":"825fae9ff1f73721a415051822f8800d35104abf442acc8f65b15cdad2567831"}
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.330815 4705 scope.go:117] "RemoveContainer" containerID="825fae9ff1f73721a415051822f8800d35104abf442acc8f65b15cdad2567831"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.357334 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" podStartSLOduration=15.305298969 podStartE2EDuration="22.357307821s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="2026-02-16 15:15:29.97164389 +0000 UTC m=+1324.156620966" lastFinishedPulling="2026-02-16 15:15:37.023652752 +0000 UTC m=+1331.208629818" observedRunningTime="2026-02-16 15:15:39.356720744 +0000 UTC m=+1333.541697820" watchObservedRunningTime="2026-02-16 15:15:39.357307821 +0000 UTC m=+1333.542284897"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.358427 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerStarted","Data":"2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a"}
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.359809 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.399649 4705 generic.go:334] "Generic (PLEG): container finished" podID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerID="9c8ecf1fe795367a88d6a0cb380949afee410f8cb00e746e4df71c7687d69924" exitCode=1
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.400330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerDied","Data":"9c8ecf1fe795367a88d6a0cb380949afee410f8cb00e746e4df71c7687d69924"}
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.400873 4705 scope.go:117] "RemoveContainer" containerID="9c8ecf1fe795367a88d6a0cb380949afee410f8cb00e746e4df71c7687d69924"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.417653 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65b6d6849b-79456" event={"ID":"94fb430a-807d-4e37-bc5a-9b4c75454427","Type":"ContainerStarted","Data":"25b842f93c5831708a88045242a40db722a4a7e440b2718e345b48d4f563a393"}
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.420045 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-65b6d6849b-79456"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.435621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-656d9cf494-c6m8t" event={"ID":"3a49bd2f-26b0-4969-86db-cd980251a202","Type":"ContainerStarted","Data":"6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60"}
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.435963 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-656d9cf494-c6m8t" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" containerName="heat-api" containerID="cri-o://6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60" gracePeriod=60
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.436067 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-656d9cf494-c6m8t"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.456091 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" podStartSLOduration=22.456067198 podStartE2EDuration="22.456067198s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:39.429313306 +0000 UTC m=+1333.614290382" watchObservedRunningTime="2026-02-16 15:15:39.456067198 +0000 UTC m=+1333.641044274"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.574665 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65b6d6849b-79456" podStartSLOduration=7.68288107 podStartE2EDuration="13.574641803s" podCreationTimestamp="2026-02-16 15:15:26 +0000 UTC" firstStartedPulling="2026-02-16 15:15:31.262026419 +0000 UTC m=+1325.447003495" lastFinishedPulling="2026-02-16 15:15:37.153787152 +0000 UTC m=+1331.338764228" observedRunningTime="2026-02-16 15:15:39.550242447 +0000 UTC m=+1333.735219523" watchObservedRunningTime="2026-02-16 15:15:39.574641803 +0000 UTC m=+1333.759618869"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.616603 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-656d9cf494-c6m8t" podStartSLOduration=16.499687728 podStartE2EDuration="22.616572092s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="2026-02-16 15:15:31.025565619 +0000 UTC m=+1325.210542685" lastFinishedPulling="2026-02-16 15:15:37.142449973 +0000 UTC m=+1331.327427049" observedRunningTime="2026-02-16 15:15:39.589600334 +0000 UTC m=+1333.774577420" watchObservedRunningTime="2026-02-16 15:15:39.616572092 +0000 UTC m=+1333.801549158"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.652048 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-565b84d684-sh8jq"
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.760693 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") "
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.760843 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") "
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.760924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") "
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.761125 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") "
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.761163 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") "
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.761221 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") "
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.761345 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") "
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.763100 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs" (OuterVolumeSpecName: "logs") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.775583 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726" (OuterVolumeSpecName: "kube-api-access-58726") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "kube-api-access-58726". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.780535 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts" (OuterVolumeSpecName: "scripts") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.865302 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.865335 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.865349 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.015519 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7cfb944475-hpwlf"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.015597 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7cfb944475-hpwlf"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.166199 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-74b44f99fd-mnr7j"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.166274 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-74b44f99fd-mnr7j"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.415449 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data" (OuterVolumeSpecName: "config-data") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.444478 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" path="/var/lib/kubelet/pods/f5639f9d-2d22-47cb-b481-10e88dc7f90f/volumes"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.467467 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-565b84d684-sh8jq"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.507420 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.530286 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.612394 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.691557 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.717085 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.841797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5"}
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.916761 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.948004 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.179317 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-565b84d684-sh8jq"]
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.214700 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-565b84d684-sh8jq"]
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.518046 4705 generic.go:334] "Generic (PLEG): container finished" podID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerID="895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926" exitCode=0
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.518180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" event={"ID":"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca","Type":"ContainerDied","Data":"895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926"}
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.547456 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerStarted","Data":"b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1"}
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.549273 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7cfb944475-hpwlf"
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.587819 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j"
event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerStarted","Data":"ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e"} Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.588831 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" Feb 16 15:15:41 crc kubenswrapper[4705]: E0216 15:15:41.589138 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70" Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.597049 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" podStartSLOduration=11.430081625 podStartE2EDuration="17.597027168s" podCreationTimestamp="2026-02-16 15:15:24 +0000 UTC" firstStartedPulling="2026-02-16 15:15:30.984618557 +0000 UTC m=+1325.169595633" lastFinishedPulling="2026-02-16 15:15:37.1515641 +0000 UTC m=+1331.336541176" observedRunningTime="2026-02-16 15:15:41.581428649 +0000 UTC m=+1335.766405725" watchObservedRunningTime="2026-02-16 15:15:41.597027168 +0000 UTC m=+1335.782004244" Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.651758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4881941b-eb71-45be-aa51-0e8431b29e89","Type":"ContainerStarted","Data":"d7b0a7eaf9b72e98b057d054d77c8d71885c3f7b2e49f0439793a568ebfdd2d8"} Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.698644 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.907486286 podStartE2EDuration="33.698622355s" podCreationTimestamp="2026-02-16 15:15:08 +0000 UTC" firstStartedPulling="2026-02-16 
15:15:09.178429717 +0000 UTC m=+1303.363406803" lastFinishedPulling="2026-02-16 15:15:38.969565796 +0000 UTC m=+1333.154542872" observedRunningTime="2026-02-16 15:15:41.690206008 +0000 UTC m=+1335.875183094" watchObservedRunningTime="2026-02-16 15:15:41.698622355 +0000 UTC m=+1335.883599421" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.064667 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.237433 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") pod \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.237562 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") pod \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.237753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tvdc\" (UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") pod \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.238147 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") pod \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.249717 4705 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc" (OuterVolumeSpecName: "kube-api-access-9tvdc") pod "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" (UID: "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca"). InnerVolumeSpecName "kube-api-access-9tvdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.251526 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" (UID: "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.342546 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tvdc\" (UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.342581 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.353514 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data" (OuterVolumeSpecName: "config-data") pod "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" (UID: "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.365336 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" (UID: "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.432168 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" path="/var/lib/kubelet/pods/8486800f-2aec-490d-a174-e05a0fa27a62/volumes" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.447233 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.447266 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.530581 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"] Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.533022 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-api" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.533542 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-api" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.533666 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3c6fc941-1576-4817-859a-6644349bc8cd" containerName="mariadb-database-create" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.533748 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6fc941-1576-4817-859a-6644349bc8cd" containerName="mariadb-database-create" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.533851 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-httpd" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.533935 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-httpd" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534026 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b468686-b5ab-423d-a720-a2c77aed457f" containerName="mariadb-database-create" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.534111 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b468686-b5ab-423d-a720-a2c77aed457f" containerName="mariadb-database-create" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534220 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-log" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.534320 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-log" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534424 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" containerName="mariadb-database-create" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.534508 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" containerName="mariadb-database-create" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534597 4705 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" containerName="mariadb-account-create-update" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.534852 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" containerName="mariadb-account-create-update" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534939 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-api" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.535039 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-api" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.535125 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18d067a-2ef1-4b11-936f-aef7f7910a80" containerName="mariadb-account-create-update" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.535207 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18d067a-2ef1-4b11-936f-aef7f7910a80" containerName="mariadb-account-create-update" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.535300 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38af35f6-7590-41c4-9442-ec89fe02106f" containerName="mariadb-account-create-update" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.535410 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38af35f6-7590-41c4-9442-ec89fe02106f" containerName="mariadb-account-create-update" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.535524 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerName="heat-cfnapi" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.535607 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerName="heat-cfnapi" Feb 16 15:15:42 crc 
kubenswrapper[4705]: I0216 15:15:42.536025 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-httpd" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536114 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerName="heat-cfnapi" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536203 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18d067a-2ef1-4b11-936f-aef7f7910a80" containerName="mariadb-account-create-update" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536295 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-api" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536402 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6fc941-1576-4817-859a-6644349bc8cd" containerName="mariadb-database-create" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536531 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" containerName="mariadb-account-create-update" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536624 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-api" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536709 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b468686-b5ab-423d-a720-a2c77aed457f" containerName="mariadb-database-create" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536800 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" containerName="mariadb-database-create" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536882 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-log" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536973 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="38af35f6-7590-41c4-9442-ec89fe02106f" containerName="mariadb-account-create-update" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.538326 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.543858 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mq9hp" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.544123 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.544384 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.552496 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"] Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.553354 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.553443 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " 
pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.553491 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.553728 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.663985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.664060 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.664105 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: 
\"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.664273 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.672899 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.677576 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.679754 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.680271 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" event={"ID":"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca","Type":"ContainerDied","Data":"07359fe7b9cf7c5f1d493c117441af14d97a55cf8e7e896736d82451018cdca8"} Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 
15:15:42.680334 4705 scope.go:117] "RemoveContainer" containerID="895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.680505 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.696568 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.700096 4705 generic.go:334] "Generic (PLEG): container finished" podID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1" exitCode=1 Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.700762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerDied","Data":"b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1"} Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.701791 4705 scope.go:117] "RemoveContainer" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.702095 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7cfb944475-hpwlf_openstack(59b661f8-8d2f-45db-ab8d-cd6436cec8eb)\"" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.704982 4705 generic.go:334] "Generic 
(PLEG): container finished" podID="951d407e-26bd-442f-8519-61650a9a3e70" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" exitCode=1 Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.705061 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j" event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerDied","Data":"ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e"} Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.706861 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.707160 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748008 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3"} Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748228 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-central-agent" containerID="cri-o://45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109" gracePeriod=30 Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748576 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748634 4705 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="proxy-httpd" containerID="cri-o://dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3" gracePeriod=30 Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748696 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="sg-core" containerID="cri-o://e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5" gracePeriod=30 Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748746 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-notification-agent" containerID="cri-o://a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b" gracePeriod=30 Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.800285 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.354199884 podStartE2EDuration="14.800259306s" podCreationTimestamp="2026-02-16 15:15:28 +0000 UTC" firstStartedPulling="2026-02-16 15:15:31.300523982 +0000 UTC m=+1325.485501048" lastFinishedPulling="2026-02-16 15:15:41.746583394 +0000 UTC m=+1335.931560470" observedRunningTime="2026-02-16 15:15:42.792362584 +0000 UTC m=+1336.977339660" watchObservedRunningTime="2026-02-16 15:15:42.800259306 +0000 UTC m=+1336.985236382" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.860012 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"] Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.871397 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.879480 4705 scope.go:117] "RemoveContainer" containerID="9c8ecf1fe795367a88d6a0cb380949afee410f8cb00e746e4df71c7687d69924" Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.883541 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"] Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.977666 4705 scope.go:117] "RemoveContainer" containerID="825fae9ff1f73721a415051822f8800d35104abf442acc8f65b15cdad2567831" Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.653121 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"] Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763413 4705 generic.go:334] "Generic (PLEG): container finished" podID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerID="dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3" exitCode=0 Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763448 4705 generic.go:334] "Generic (PLEG): container finished" podID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerID="e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5" exitCode=2 Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763456 4705 generic.go:334] "Generic (PLEG): container finished" podID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerID="a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b" exitCode=0 Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763493 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3"} Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763545 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5"} Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763559 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b"} Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.765123 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" event={"ID":"06284688-bd14-48ff-adf1-d0dc441d1238","Type":"ContainerStarted","Data":"f6b9a0b3fcc55910c8b4dfbb0758f383016eccbe6cd9929f6713ec8d06da6409"} Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.769943 4705 scope.go:117] "RemoveContainer" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1" Feb 16 15:15:43 crc kubenswrapper[4705]: E0216 15:15:43.770451 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7cfb944475-hpwlf_openstack(59b661f8-8d2f-45db-ab8d-cd6436cec8eb)\"" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.771543 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" Feb 16 15:15:43 crc kubenswrapper[4705]: E0216 15:15:43.771816 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70" Feb 16 15:15:44 crc 
kubenswrapper[4705]: I0216 15:15:44.443508 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" path="/var/lib/kubelet/pods/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca/volumes" Feb 16 15:15:44 crc kubenswrapper[4705]: I0216 15:15:44.828522 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d09b351a-8da4-4f00-8847-f3461478179f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.226:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:15:44 crc kubenswrapper[4705]: I0216 15:15:44.854718 4705 generic.go:334] "Generic (PLEG): container finished" podID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerID="45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109" exitCode=0 Feb 16 15:15:44 crc kubenswrapper[4705]: I0216 15:15:44.854772 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109"} Feb 16 15:15:44 crc kubenswrapper[4705]: I0216 15:15:44.885653 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.016604 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.017791 4705 scope.go:117] "RemoveContainer" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1" Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.018147 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi 
pod=heat-cfnapi-7cfb944475-hpwlf_openstack(59b661f8-8d2f-45db-ab8d-cd6436cec8eb)\"" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.021626 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"] Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.021867 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7b7cc9557b-77tq2" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" containerID="cri-o://5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" gracePeriod=60 Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.064816 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.066705 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.087900 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.087972 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc 
= command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7b7cc9557b-77tq2" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.165835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.166941 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.167195 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.168747 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.562217 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679504 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679752 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679780 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679802 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679935 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.680023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.680172 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.681025 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.681233 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.690622 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts" (OuterVolumeSpecName: "scripts") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.690706 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v" (OuterVolumeSpecName: "kube-api-access-dbr2v") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "kube-api-access-dbr2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.767620 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783673 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783709 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783720 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783732 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") on node \"crc\" 
DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783744 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.871200 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data" (OuterVolumeSpecName: "config-data") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.884855 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.885469 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.886204 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54"} Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.886249 4705 scope.go:117] "RemoveContainer" containerID="dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.887817 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.890074 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.890109 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.890078 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.997971 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.016401 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 
15:15:46.022843 4705 scope.go:117] "RemoveContainer" containerID="e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034169 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:46 crc kubenswrapper[4705]: E0216 15:15:46.034738 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="proxy-httpd" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034756 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="proxy-httpd" Feb 16 15:15:46 crc kubenswrapper[4705]: E0216 15:15:46.034790 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="sg-core" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034796 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="sg-core" Feb 16 15:15:46 crc kubenswrapper[4705]: E0216 15:15:46.034808 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-central-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034818 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-central-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: E0216 15:15:46.034831 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-notification-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034837 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-notification-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.035065 4705 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-central-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.035088 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="proxy-httpd" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.035103 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-notification-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.035120 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="sg-core" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.037227 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.040216 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.040500 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.051203 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.091801 4705 scope.go:117] "RemoveContainer" containerID="a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.126079 4705 scope.go:117] "RemoveContainer" containerID="45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206208 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") pod \"ceilometer-0\" 
(UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206674 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206756 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206817 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.207158 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 
15:15:46.207217 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311197 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311295 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311344 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311393 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311447 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311558 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311580 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.312941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.314210 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.317973 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.319202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.321736 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.329496 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.337069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.356633 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.438832 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" path="/var/lib/kubelet/pods/6760289c-b8a9-45ed-bbab-3d5d5ca1db17/volumes" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.952929 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:47 crc kubenswrapper[4705]: E0216 15:15:47.867700 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:47 crc kubenswrapper[4705]: E0216 15:15:47.872940 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:47 crc kubenswrapper[4705]: E0216 15:15:47.878757 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:47 crc kubenswrapper[4705]: E0216 15:15:47.878804 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7b7cc9557b-77tq2" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" 
Feb 16 15:15:47 crc kubenswrapper[4705]: I0216 15:15:47.930776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537"} Feb 16 15:15:47 crc kubenswrapper[4705]: I0216 15:15:47.931176 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"0b27ef6e89e2cf0aaae157a2376b147fb79e694acf057c4989514c1f299a5941"} Feb 16 15:15:47 crc kubenswrapper[4705]: I0216 15:15:47.933263 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:48 crc kubenswrapper[4705]: I0216 15:15:48.029427 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:15:48 crc kubenswrapper[4705]: I0216 15:15:48.029837 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="dnsmasq-dns" containerID="cri-o://e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1" gracePeriod=10 Feb 16 15:15:48 crc kubenswrapper[4705]: I0216 15:15:48.405864 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.068011 4705 generic.go:334] "Generic (PLEG): container finished" podID="541411df-f636-4dab-a4e2-2ecc8933f236" containerID="e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1" exitCode=0 Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.068075 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" 
event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerDied","Data":"e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1"} Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.123358 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.303945 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304288 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304324 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304403 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304428 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") 
pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304678 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.322390 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv" (OuterVolumeSpecName: "kube-api-access-fkbhv") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "kube-api-access-fkbhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.391215 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.407849 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.407892 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.446056 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.450429 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.503478 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.503489 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config" (OuterVolumeSpecName: "config") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.512566 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.512595 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.512606 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.512615 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.080833 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.091804 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a"} Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.093260 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerDied","Data":"4cd8d63ef6157fd647119bfab51e4fd5281201daf21b70697f5351220cfe9c1c"} Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.093299 4705 scope.go:117] "RemoveContainer" containerID="e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.093489 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.167005 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.183241 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.201649 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.259699 4705 scope.go:117] "RemoveContainer" containerID="40437351e7b265646ad6bf7b8802bcd81622e7977bf5739847bd739b6a21b1a3" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.380106 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.456510 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" path="/var/lib/kubelet/pods/541411df-f636-4dab-a4e2-2ecc8933f236/volumes" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.457325 
4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.898877 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.054434 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") pod \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.055065 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") pod \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.055168 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") pod \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.055320 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") pod \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.070025 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4" (OuterVolumeSpecName: "kube-api-access-265p4") pod 
"59b661f8-8d2f-45db-ab8d-cd6436cec8eb" (UID: "59b661f8-8d2f-45db-ab8d-cd6436cec8eb"). InnerVolumeSpecName "kube-api-access-265p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.072197 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "59b661f8-8d2f-45db-ab8d-cd6436cec8eb" (UID: "59b661f8-8d2f-45db-ab8d-cd6436cec8eb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.127860 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59b661f8-8d2f-45db-ab8d-cd6436cec8eb" (UID: "59b661f8-8d2f-45db-ab8d-cd6436cec8eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.161003 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerDied","Data":"7eed159df357b814d8fe77b30f4e632478a311f8b770660151ac4fae245b6428"} Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.161344 4705 scope.go:117] "RemoveContainer" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.161499 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.162051 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.162077 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.162089 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.193579 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b"} Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.235255 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.276838 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data" (OuterVolumeSpecName: "config-data") pod "59b661f8-8d2f-45db-ab8d-cd6436cec8eb" (UID: "59b661f8-8d2f-45db-ab8d-cd6436cec8eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.378780 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") pod \"951d407e-26bd-442f-8519-61650a9a3e70\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.378965 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") pod \"951d407e-26bd-442f-8519-61650a9a3e70\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.379149 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") pod \"951d407e-26bd-442f-8519-61650a9a3e70\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.379499 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwfs2\" (UniqueName: \"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") pod \"951d407e-26bd-442f-8519-61650a9a3e70\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.380898 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.413574 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.423333 4705 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "951d407e-26bd-442f-8519-61650a9a3e70" (UID: "951d407e-26bd-442f-8519-61650a9a3e70"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.423860 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2" (OuterVolumeSpecName: "kube-api-access-xwfs2") pod "951d407e-26bd-442f-8519-61650a9a3e70" (UID: "951d407e-26bd-442f-8519-61650a9a3e70"). InnerVolumeSpecName "kube-api-access-xwfs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.485472 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.485806 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwfs2\" (UniqueName: \"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.509737 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "951d407e-26bd-442f-8519-61650a9a3e70" (UID: "951d407e-26bd-442f-8519-61650a9a3e70"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.546512 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data" (OuterVolumeSpecName: "config-data") pod "951d407e-26bd-442f-8519-61650a9a3e70" (UID: "951d407e-26bd-442f-8519-61650a9a3e70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.560414 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.576703 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.588894 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.589307 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.236970 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8"} Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.239138 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.255039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j" 
event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerDied","Data":"5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34"} Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.255102 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.255222 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.284101 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.606394167 podStartE2EDuration="7.284076437s" podCreationTimestamp="2026-02-16 15:15:45 +0000 UTC" firstStartedPulling="2026-02-16 15:15:46.976511274 +0000 UTC m=+1341.161488350" lastFinishedPulling="2026-02-16 15:15:51.654193544 +0000 UTC m=+1345.839170620" observedRunningTime="2026-02-16 15:15:52.26389577 +0000 UTC m=+1346.448872846" watchObservedRunningTime="2026-02-16 15:15:52.284076437 +0000 UTC m=+1346.469053513" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.391942 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.440859 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" path="/var/lib/kubelet/pods/59b661f8-8d2f-45db-ab8d-cd6436cec8eb/volumes" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.441721 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:53 crc kubenswrapper[4705]: I0216 15:15:53.872832 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:54 crc kubenswrapper[4705]: I0216 15:15:54.287258 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" exitCode=0 Feb 16 15:15:54 crc kubenswrapper[4705]: I0216 15:15:54.287560 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7cc9557b-77tq2" event={"ID":"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa","Type":"ContainerDied","Data":"5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525"} Feb 16 15:15:54 crc kubenswrapper[4705]: I0216 15:15:54.457962 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951d407e-26bd-442f-8519-61650a9a3e70" path="/var/lib/kubelet/pods/951d407e-26bd-442f-8519-61650a9a3e70/volumes" Feb 16 15:15:55 crc kubenswrapper[4705]: I0216 15:15:55.298785 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-central-agent" containerID="cri-o://0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4705]: I0216 15:15:55.299181 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-notification-agent" containerID="cri-o://ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4705]: I0216 15:15:55.298894 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="proxy-httpd" containerID="cri-o://fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4705]: I0216 15:15:55.298852 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="sg-core" 
containerID="cri-o://182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b" gracePeriod=30 Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314037 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerID="fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8" exitCode=0 Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314097 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerID="182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b" exitCode=2 Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314113 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerID="ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a" exitCode=0 Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8"} Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314205 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b"} Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314219 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a"} Feb 16 15:15:57 crc kubenswrapper[4705]: E0216 15:15:57.860067 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525 is running failed: container process not found" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:57 crc kubenswrapper[4705]: E0216 15:15:57.861248 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525 is running failed: container process not found" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:57 crc kubenswrapper[4705]: E0216 15:15:57.861723 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525 is running failed: container process not found" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:57 crc kubenswrapper[4705]: E0216 15:15:57.861761 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-7b7cc9557b-77tq2" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" Feb 16 15:16:01 crc kubenswrapper[4705]: I0216 15:16:01.695818 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:16:01 crc kubenswrapper[4705]: 
I0216 15:16:01.696442 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:16:01 crc kubenswrapper[4705]: I0216 15:16:01.696647 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:16:01 crc kubenswrapper[4705]: I0216 15:16:01.698725 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:16:01 crc kubenswrapper[4705]: I0216 15:16:01.698797 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38" gracePeriod=600 Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.260986 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.326931 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") pod \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.327130 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") pod \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.327299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") pod \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.327430 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") pod \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.344586 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" (UID: "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.350123 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v" (OuterVolumeSpecName: "kube-api-access-znx6v") pod "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" (UID: "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa"). InnerVolumeSpecName "kube-api-access-znx6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.398510 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" event={"ID":"06284688-bd14-48ff-adf1-d0dc441d1238","Type":"ContainerStarted","Data":"85317c63c64342b640443d7128098cf7e3a161e71ceb14f41123a4cc90d3489a"} Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.422097 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38" exitCode=0 Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.440811 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.440862 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.444436 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7b7cc9557b-77tq2"
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.468199 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" podStartSLOduration=2.63658934 podStartE2EDuration="20.468169443s" podCreationTimestamp="2026-02-16 15:15:42 +0000 UTC" firstStartedPulling="2026-02-16 15:15:43.647342269 +0000 UTC m=+1337.832319345" lastFinishedPulling="2026-02-16 15:16:01.478922372 +0000 UTC m=+1355.663899448" observedRunningTime="2026-02-16 15:16:02.437327406 +0000 UTC m=+1356.622304482" watchObservedRunningTime="2026-02-16 15:16:02.468169443 +0000 UTC m=+1356.653146519"
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.585765 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38"}
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.586112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7cc9557b-77tq2" event={"ID":"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa","Type":"ContainerDied","Data":"00f8e5fe522e813566a78b6896b44d2c17e83898b0bbb39385052b0a457034e8"}
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.586160 4705 scope.go:117] "RemoveContainer" containerID="de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546"
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.590278 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" (UID: "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.626064 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data" (OuterVolumeSpecName: "config-data") pod "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" (UID: "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.688598 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.688648 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.719577 4705 scope.go:117] "RemoveContainer" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525"
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.796217 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"]
Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.813455 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"]
Feb 16 15:16:03 crc kubenswrapper[4705]: I0216 15:16:03.475660 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"}
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.436767 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" path="/var/lib/kubelet/pods/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa/volumes"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.475996 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.518643 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerID="0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537" exitCode=0
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.520576 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.521581 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537"}
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.521737 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"0b27ef6e89e2cf0aaae157a2376b147fb79e694acf057c4989514c1f299a5941"}
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.521838 4705 scope.go:117] "RemoveContainer" containerID="fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.558914 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") "
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.558978 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") "
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559018 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") "
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559044 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") "
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559138 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") "
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559160 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") "
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559918 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.560175 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.568304 4705 scope.go:117] "RemoveContainer" containerID="182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.589390 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts" (OuterVolumeSpecName: "scripts") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.604495 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr" (OuterVolumeSpecName: "kube-api-access-w2tgr") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "kube-api-access-w2tgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.626153 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.661554 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") "
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.662989 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.663013 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.663023 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.663033 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.663042 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.722086 4705 scope.go:117] "RemoveContainer" containerID="ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.758551 4705 scope.go:117]
"RemoveContainer" containerID="0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537"
Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.781019 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data podName:3a635e46-1a87-4961-8a11-8c3c7d7adbd1 nodeName:}" failed. No retries permitted until 2026-02-16 15:16:05.280975389 +0000 UTC m=+1359.465952465 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1") : error deleting /var/lib/kubelet/pods/3a635e46-1a87-4961-8a11-8c3c7d7adbd1/volume-subpaths: remove /var/lib/kubelet/pods/3a635e46-1a87-4961-8a11-8c3c7d7adbd1/volume-subpaths: no such file or directory
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.790443 4705 scope.go:117] "RemoveContainer" containerID="fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8"
Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.792780 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8\": container with ID starting with fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8 not found: ID does not exist" containerID="fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.792833 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8"} err="failed to get container status \"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8\": rpc error: code = NotFound desc = could not find container \"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8\": container with ID starting with fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8 not found: ID does not exist"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.792867 4705 scope.go:117] "RemoveContainer" containerID="182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b"
Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.793263 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b\": container with ID starting with 182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b not found: ID does not exist" containerID="182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.793383 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b"} err="failed to get container status \"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b\": rpc error: code = NotFound desc = could not find container \"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b\": container with ID starting with 182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b not found: ID does not exist"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.793472 4705 scope.go:117] "RemoveContainer" containerID="ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a"
Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.793900 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a\": container with ID starting with ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a not found: ID does not exist" containerID="ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.793990 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a"} err="failed to get container status \"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a\": rpc error: code = NotFound desc = could not find container \"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a\": container with ID starting with ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a not found: ID does not exist"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.794090 4705 scope.go:117] "RemoveContainer" containerID="0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.794460 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.797348 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537\": container with ID starting with 0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537 not found: ID does not exist" containerID="0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.797428 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537"} err="failed to get container status \"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537\": rpc error: code = NotFound desc = could not find container \"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537\": container with ID starting with 0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537 not found: ID does not exist"
Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.870715 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.284596 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") "
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.302754 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data" (OuterVolumeSpecName: "config-data") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.388717 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.458892 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.473439 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.521797 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522388 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-notification-agent"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522403 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-notification-agent"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522422 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="init"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522428 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="init"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522438 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522444 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522462 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522468 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522483 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-central-agent"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522489 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-central-agent"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522514 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522520 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522534 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="proxy-httpd"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522540 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="proxy-httpd"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522564 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522569 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522584 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="sg-core"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522591 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="sg-core"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522602 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="dnsmasq-dns"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522610 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="dnsmasq-dns"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522835 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522845 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522857 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-central-agent"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522863 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="proxy-httpd"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522874 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522885 4705 memory_manager.go:354] "RemoveStaleState removing state"
podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="sg-core"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522893 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-notification-agent"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522908 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522921 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="dnsmasq-dns"
Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.523191 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.523203 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.523515 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.525984 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.528635 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.528869 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.555578 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.698650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.698858 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.698998 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.699234 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.699382 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.699605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.699731 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802641 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802701 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802827 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802883 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802926 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802946 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802971 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.803463 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.803737 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.823462 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.823493 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.824204 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.827399 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.832802 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0"
Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.846665 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:16:06 crc kubenswrapper[4705]: I0216 15:16:06.441041 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" path="/var/lib/kubelet/pods/3a635e46-1a87-4961-8a11-8c3c7d7adbd1/volumes"
Feb 16 15:16:06 crc kubenswrapper[4705]: I0216 15:16:06.544594 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:06 crc kubenswrapper[4705]: I0216 15:16:06.580634 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"71417d528440578012a0700050f75b1c04d4288adeeb4513729b50c1c01939e5"}
Feb 16 15:16:07 crc kubenswrapper[4705]: I0216 15:16:07.595306 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3"}
Feb 16 15:16:08 crc kubenswrapper[4705]: I0216 15:16:08.610650 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac"}
Feb 16 15:16:09 crc kubenswrapper[4705]: I0216 15:16:09.623962 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a"}
Feb 16 15:16:11 crc kubenswrapper[4705]: I0216 15:16:11.651054 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22"}
Feb 16 15:16:11 crc kubenswrapper[4705]: I0216 15:16:11.651628 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 15:16:11 crc kubenswrapper[4705]: I0216 15:16:11.710070 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.536088173 podStartE2EDuration="6.710042067s" podCreationTimestamp="2026-02-16 15:16:05 +0000 UTC" firstStartedPulling="2026-02-16 15:16:06.572988891 +0000 UTC m=+1360.757965967" lastFinishedPulling="2026-02-16 15:16:10.746942785 +0000 UTC m=+1364.931919861" observedRunningTime="2026-02-16 15:16:11.685929638 +0000 UTC m=+1365.870906714" watchObservedRunningTime="2026-02-16 15:16:11.710042067 +0000 UTC m=+1365.895019143"
Feb 16 15:16:16 crc kubenswrapper[4705]: I0216 15:16:16.716113 4705 generic.go:334] "Generic (PLEG): container finished" podID="06284688-bd14-48ff-adf1-d0dc441d1238" containerID="85317c63c64342b640443d7128098cf7e3a161e71ceb14f41123a4cc90d3489a" exitCode=0
Feb 16 15:16:16 crc kubenswrapper[4705]: I0216 15:16:16.716195 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" event={"ID":"06284688-bd14-48ff-adf1-d0dc441d1238","Type":"ContainerDied","Data":"85317c63c64342b640443d7128098cf7e3a161e71ceb14f41123a4cc90d3489a"}
Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.177443 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.346518 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") pod \"06284688-bd14-48ff-adf1-d0dc441d1238\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") "
Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.346779 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") pod \"06284688-bd14-48ff-adf1-d0dc441d1238\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") "
Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.346863 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") pod \"06284688-bd14-48ff-adf1-d0dc441d1238\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") "
Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.347019 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") pod \"06284688-bd14-48ff-adf1-d0dc441d1238\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") "
Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.354301 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts" (OuterVolumeSpecName: "scripts") pod "06284688-bd14-48ff-adf1-d0dc441d1238" (UID: "06284688-bd14-48ff-adf1-d0dc441d1238"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.357501 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh" (OuterVolumeSpecName: "kube-api-access-rn9xh") pod "06284688-bd14-48ff-adf1-d0dc441d1238" (UID: "06284688-bd14-48ff-adf1-d0dc441d1238"). InnerVolumeSpecName "kube-api-access-rn9xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.388320 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data" (OuterVolumeSpecName: "config-data") pod "06284688-bd14-48ff-adf1-d0dc441d1238" (UID: "06284688-bd14-48ff-adf1-d0dc441d1238"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.389203 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06284688-bd14-48ff-adf1-d0dc441d1238" (UID: "06284688-bd14-48ff-adf1-d0dc441d1238"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.449823 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.449859 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.449872 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.449882 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.741282 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" event={"ID":"06284688-bd14-48ff-adf1-d0dc441d1238","Type":"ContainerDied","Data":"f6b9a0b3fcc55910c8b4dfbb0758f383016eccbe6cd9929f6713ec8d06da6409"} Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.741598 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b9a0b3fcc55910c8b4dfbb0758f383016eccbe6cd9929f6713ec8d06da6409" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.741685 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.897628 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:18 crc kubenswrapper[4705]: E0216 15:16:18.898203 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06284688-bd14-48ff-adf1-d0dc441d1238" containerName="nova-cell0-conductor-db-sync" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.898223 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="06284688-bd14-48ff-adf1-d0dc441d1238" containerName="nova-cell0-conductor-db-sync" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.898515 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="06284688-bd14-48ff-adf1-d0dc441d1238" containerName="nova-cell0-conductor-db-sync" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.899446 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.907259 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mq9hp" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.907557 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.923976 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.073057 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: 
I0216 15:16:19.073107 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.073516 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.176209 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.176577 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.176609 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.184883 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.185145 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.195793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.230281 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.767168 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.769677 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e47e02d-1f4b-44d5-b6c7-d12353efb4db","Type":"ContainerStarted","Data":"d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299"} Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.770099 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e47e02d-1f4b-44d5-b6c7-d12353efb4db","Type":"ContainerStarted","Data":"89773dd4ddcc151bb2dd44670cb30683011d6f4c21b57a0cc856f9fb0cb8aa40"} Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.770120 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.796631 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.811098 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.811074681 podStartE2EDuration="2.811074681s" podCreationTimestamp="2026-02-16 15:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:20.798668152 +0000 UTC m=+1374.983645228" watchObservedRunningTime="2026-02-16 15:16:20.811074681 +0000 UTC m=+1374.996051757" Feb 16 15:16:22 crc kubenswrapper[4705]: I0216 15:16:22.797675 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" 
containerName="nova-cell0-conductor-conductor" containerID="cri-o://d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.188330 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.189000 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-central-agent" containerID="cri-o://a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.189153 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" containerID="cri-o://a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.189196 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="sg-core" containerID="cri-o://89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.189230 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-notification-agent" containerID="cri-o://aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.228753 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.229:3000/\": EOF" Feb 16 
15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.327116 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.327462 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-log" containerID="cri-o://7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.327619 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-httpd" containerID="cri-o://0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.810897 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerID="7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" exitCode=143 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.810972 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerDied","Data":"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89"} Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.814832 4705 generic.go:334] "Generic (PLEG): container finished" podID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerID="a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" exitCode=0 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.814869 4705 generic.go:334] "Generic (PLEG): container finished" podID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerID="89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" exitCode=2 Feb 16 15:16:23 crc 
kubenswrapper[4705]: I0216 15:16:23.814894 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22"} Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.814926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a"} Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.853801 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3"} Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.853681 4705 generic.go:334] "Generic (PLEG): container finished" podID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerID="a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" exitCode=0 Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.998063 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.998456 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-log" containerID="cri-o://dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9" gracePeriod=30 Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.998593 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-httpd" containerID="cri-o://365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643" 
gracePeriod=30 Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.859129 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.900626 4705 generic.go:334] "Generic (PLEG): container finished" podID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerID="aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" exitCode=0 Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.900925 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac"} Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.903877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"71417d528440578012a0700050f75b1c04d4288adeeb4513729b50c1c01939e5"} Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.903964 4705 scope.go:117] "RemoveContainer" containerID="a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.904965 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.910201 4705 generic.go:334] "Generic (PLEG): container finished" podID="2678da20-6fd3-430b-8841-40842382c4fb" containerID="dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9" exitCode=143 Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.910283 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerDied","Data":"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9"} Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.914523 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.914589 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.916698 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.916791 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: 
\"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.916835 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.916883 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.917023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.918552 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.918916 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.922777 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts" (OuterVolumeSpecName: "scripts") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.929604 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.929660 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.929676 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.953259 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9" (OuterVolumeSpecName: "kube-api-access-vjgd9") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "kube-api-access-vjgd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.968663 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.013637 4705 scope.go:117] "RemoveContainer" containerID="89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.037230 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.037271 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.042997 4705 scope.go:117] "RemoveContainer" containerID="aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.092396 4705 scope.go:117] "RemoveContainer" containerID="a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.092330 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.125233 4705 scope.go:117] "RemoveContainer" containerID="a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.127501 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22\": container with ID starting with a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22 not found: ID does not exist" containerID="a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.127552 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22"} err="failed to get container status \"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22\": rpc error: code = NotFound desc = could not find container \"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22\": container with ID starting with a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22 not found: ID does not exist" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.127603 4705 scope.go:117] "RemoveContainer" containerID="89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.128296 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a\": container with ID starting with 89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a not found: ID does not exist" containerID="89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.128379 
4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a"} err="failed to get container status \"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a\": rpc error: code = NotFound desc = could not find container \"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a\": container with ID starting with 89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a not found: ID does not exist" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.128425 4705 scope.go:117] "RemoveContainer" containerID="aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.128847 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac\": container with ID starting with aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac not found: ID does not exist" containerID="aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.128870 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac"} err="failed to get container status \"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac\": rpc error: code = NotFound desc = could not find container \"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac\": container with ID starting with aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac not found: ID does not exist" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.128889 4705 scope.go:117] "RemoveContainer" containerID="a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 
15:16:26.129133 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3\": container with ID starting with a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3 not found: ID does not exist" containerID="a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.129155 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3"} err="failed to get container status \"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3\": rpc error: code = NotFound desc = could not find container \"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3\": container with ID starting with a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3 not found: ID does not exist" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.135175 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data" (OuterVolumeSpecName: "config-data") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.139513 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.139544 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.264314 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.277552 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.300717 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.301283 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-notification-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301302 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-notification-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.301332 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="sg-core" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301342 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="sg-core" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.301383 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-central-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301390 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-central-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.301406 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301412 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301659 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301687 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-notification-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301706 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-central-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301716 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="sg-core" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.303954 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.306575 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.306577 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.323992 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346472 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346541 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346579 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346715 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " 
pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346779 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346824 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346864 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxlf\" (UniqueName: \"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.449932 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.451767 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.451862 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.452157 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.452277 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.452350 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.452453 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdxlf\" (UniqueName: \"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.454709 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: 
I0216 15:16:26.456418 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" path="/var/lib/kubelet/pods/8afeb982-5b6c-4224-a38d-ce53a6e37f86/volumes" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.457320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.459113 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.461000 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-gdxlf scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="72eeae29-5189-4fbd-936f-62c4bbe94388" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.476333 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.477042 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.479207 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.480322 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdxlf\" (UniqueName: 
\"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.488504 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.488773 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.491118 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.922330 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.933073 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.067588 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068061 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068168 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068327 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdxlf\" (UniqueName: \"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068392 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068502 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068569 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.073186 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.075468 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.075840 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.077124 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf" (OuterVolumeSpecName: "kube-api-access-gdxlf") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "kube-api-access-gdxlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.077221 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data" (OuterVolumeSpecName: "config-data") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.078722 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.085751 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts" (OuterVolumeSpecName: "scripts") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172125 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172178 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdxlf\" (UniqueName: \"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172189 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172200 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172210 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172218 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172226 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.630129 4705 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.787081 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.788418 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.789095 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.789249 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.789361 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.789462 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.788322 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs" (OuterVolumeSpecName: "logs") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.790142 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.790600 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.790838 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.791663 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") on node \"crc\" DevicePath 
\"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.791735 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.794297 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts" (OuterVolumeSpecName: "scripts") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.796185 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5" (OuterVolumeSpecName: "kube-api-access-j75f5") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "kube-api-access-j75f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.852583 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.865280 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e" (OuterVolumeSpecName: "glance") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). 
InnerVolumeSpecName "pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.879841 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.883809 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data" (OuterVolumeSpecName: "config-data") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894211 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894439 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894504 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894559 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894643 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") on node \"crc\" " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894704 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.938814 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerID="0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" exitCode=0 Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.940311 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.939582 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerDied","Data":"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870"} Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.940590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerDied","Data":"c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee"} Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.940674 4705 scope.go:117] "RemoveContainer" containerID="0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.939313 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.941115 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.941417 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e") on node "crc" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.985622 4705 scope.go:117] "RemoveContainer" containerID="7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.000817 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.014957 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.039801 4705 scope.go:117] "RemoveContainer" containerID="0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.042543 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870\": container with ID starting with 0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870 not found: ID does not exist" containerID="0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.042649 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870"} err="failed to get container status \"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870\": rpc error: code = NotFound desc = could not find container 
\"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870\": container with ID starting with 0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870 not found: ID does not exist" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.042696 4705 scope.go:117] "RemoveContainer" containerID="7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.043562 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89\": container with ID starting with 7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89 not found: ID does not exist" containerID="7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.043650 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89"} err="failed to get container status \"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89\": rpc error: code = NotFound desc = could not find container \"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89\": container with ID starting with 7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89 not found: ID does not exist" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.091838 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.155462 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.189151 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.204618 4705 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.205590 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-log" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.205637 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-log" Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.205671 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-httpd" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.205677 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-httpd" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.206025 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-httpd" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.206068 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-log" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.227218 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.233437 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.233757 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.259820 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.292322 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.294953 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.297483 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.299925 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.312060 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72eeae29_5189_4fbd_936f_62c4bbe94388.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2678da20_6fd3_430b_8841_40842382c4fb.slice/crio-365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8f8a7c2_28a1_45b0_ac6a_9b6f33ac1a73.slice/crio-c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee\": 
RecentStats: unable to find data in memory cache]" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.325601 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327232 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327312 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327392 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327437 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") pod 
\"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.328141 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.328550 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431152 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431215 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431238 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " 
pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431290 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431306 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431358 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-logs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431413 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431441 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431484 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84kw6\" (UniqueName: \"kubernetes.io/projected/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-kube-api-access-84kw6\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431546 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431589 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431612 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431633 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431658 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.435893 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.436582 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.442110 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.442649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc 
kubenswrapper[4705]: I0216 15:16:28.447181 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72eeae29-5189-4fbd-936f-62c4bbe94388" path="/var/lib/kubelet/pods/72eeae29-5189-4fbd-936f-62c4bbe94388/volumes" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.448450 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" path="/var/lib/kubelet/pods/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73/volumes" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.454549 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.461246 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.473853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.535544 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536232 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536254 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536342 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536436 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-logs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84kw6\" (UniqueName: \"kubernetes.io/projected/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-kube-api-access-84kw6\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536916 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536958 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.541106 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-logs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.545910 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536039 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.548750 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.551183 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.551565 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.577448 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.577499 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69f9a0afde09cde3194ac3fcfa9df7bd80860335646625dfa8f7f213d22f9d05/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.578004 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84kw6\" (UniqueName: \"kubernetes.io/projected/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-kube-api-access-84kw6\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.641214 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.683952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.753186 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.846609 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.847016 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848403 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848588 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848638 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848662 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848683 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848715 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.849748 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs" (OuterVolumeSpecName: "logs") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.851046 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.852885 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.852914 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.865550 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts" (OuterVolumeSpecName: "scripts") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.873355 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6" (OuterVolumeSpecName: "kube-api-access-v6jf6") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "kube-api-access-v6jf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.900888 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157" (OuterVolumeSpecName: "glance") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.919484 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.950207 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.956229 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.956271 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.956286 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.956329 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") on node \"crc\" "
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.957104 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data" (OuterVolumeSpecName: "config-data") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.969751 4705 generic.go:334] "Generic (PLEG): container finished" podID="2678da20-6fd3-430b-8841-40842382c4fb" containerID="365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643" exitCode=0
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.970030 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerDied","Data":"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643"}
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.970121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerDied","Data":"15fe536f2d1e7276c5b6aa9bd3efbc8aff43c887dcf49127f48384d48325f958"}
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.970204 4705 scope.go:117] "RemoveContainer" containerID="365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643"
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.970497 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.021248 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.035305 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.035809 4705 scope.go:117] "RemoveContainer" containerID="dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.041021 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157") on node "crc"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.059259 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.059304 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.059317 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.078268 4705 scope.go:117] "RemoveContainer" containerID="365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643"
Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.079827 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643\": container with ID starting with 365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643 not found: ID does not exist" containerID="365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.079880 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643"} err="failed to get container status \"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643\": rpc error: code = NotFound desc = could not find container \"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643\": container with ID starting with 365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643 not found: ID does not exist"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.079907 4705 scope.go:117] "RemoveContainer" containerID="dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9"
Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.085384 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9\": container with ID starting with dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9 not found: ID does not exist" containerID="dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.085429 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9"} err="failed to get container status \"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9\": rpc error: code = NotFound desc = could not find container \"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9\": container with ID starting with dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9 not found: ID does not exist"
Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.235800 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.238319 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.240298 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.240342 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.330461 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.354842 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.371868 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.389572 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.390269 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-httpd"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.390290 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-httpd"
Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.390328 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-log"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.390335 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-log"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.390601 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-httpd"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.390651 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-log"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.392221 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.396508 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.397204 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.403007 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.482780 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm7db\" (UniqueName: \"kubernetes.io/projected/28ba576c-ee01-48ea-b78b-a2bea81b90a2-kube-api-access-cm7db\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483260 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483422 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483562 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483594 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483720 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.586986 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587047 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm7db\" (UniqueName: \"kubernetes.io/projected/28ba576c-ee01-48ea-b78b-a2bea81b90a2-kube-api-access-cm7db\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587218 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587264 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587460 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.588263 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.588741 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.594740 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.597315 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.598052 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.604194 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.604243 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5d44f58a274729942503542a04ea080ac58862a31aa07a9ece94d5eb6543b70/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.621034 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.626173 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm7db\" (UniqueName: \"kubernetes.io/projected/28ba576c-ee01-48ea-b78b-a2bea81b90a2-kube-api-access-cm7db\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.973964 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.001581 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"1d1c90bbe89df2444f211fbae43512bd74e7492f1d6052bd985eae43052c1133"}
Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.004841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.005188 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ef0b445-ec9e-4c58-a7d3-59068664d3ca","Type":"ContainerStarted","Data":"42409d76c6328a1e20bf61bf099083b597e7541e6ba3851697295b44a1a71728"}
Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.053551 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.440475 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2678da20-6fd3-430b-8841-40842382c4fb" path="/var/lib/kubelet/pods/2678da20-6fd3-430b-8841-40842382c4fb/volumes"
Feb 16 15:16:30 crc kubenswrapper[4705]: W0216 15:16:30.800544 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ba576c_ee01_48ea_b78b_a2bea81b90a2.slice/crio-b94c3fcca6af45e8c10f9af8f5f71a8234e67c0beb8c712cb04863449935e444 WatchSource:0}: Error finding container b94c3fcca6af45e8c10f9af8f5f71a8234e67c0beb8c712cb04863449935e444: Status 404 returned error can't find the container with id b94c3fcca6af45e8c10f9af8f5f71a8234e67c0beb8c712cb04863449935e444
Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.801158 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 15:16:31 crc kubenswrapper[4705]: I0216 15:16:31.037503 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14"}
Feb 16 15:16:31 crc kubenswrapper[4705]: I0216 15:16:31.050031 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28ba576c-ee01-48ea-b78b-a2bea81b90a2","Type":"ContainerStarted","Data":"b94c3fcca6af45e8c10f9af8f5f71a8234e67c0beb8c712cb04863449935e444"}
Feb 16 15:16:31 crc kubenswrapper[4705]: I0216 15:16:31.061564 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ef0b445-ec9e-4c58-a7d3-59068664d3ca","Type":"ContainerStarted","Data":"fc15a68c46e4a01f1bfd32ecf47726ae0ce0940adb334ef0150d24181c9ce669"}
Feb 16 15:16:32 crc kubenswrapper[4705]: I0216 15:16:32.084605 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28ba576c-ee01-48ea-b78b-a2bea81b90a2","Type":"ContainerStarted","Data":"7233b5edcef46f692b6133525276a43ea82217316fe4a9039c193bc50033373b"}
Feb 16 15:16:32 crc kubenswrapper[4705]: I0216 15:16:32.092495 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ef0b445-ec9e-4c58-a7d3-59068664d3ca","Type":"ContainerStarted","Data":"ba000d59d264edaf8176f1a2f76b35d2d7f5a1361b20eec29741f809ac8aed78"}
Feb 16 15:16:32 crc kubenswrapper[4705]: I0216 15:16:32.107153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466"}
Feb 16 15:16:32 crc kubenswrapper[4705]: I0216 15:16:32.134714 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.134686319 podStartE2EDuration="4.134686319s" podCreationTimestamp="2026-02-16 15:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:32.123974788 +0000 UTC m=+1386.308951874" watchObservedRunningTime="2026-02-16 15:16:32.134686319 +0000 UTC m=+1386.319663395"
Feb 16 15:16:33 crc kubenswrapper[4705]: I0216 15:16:33.128913 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28ba576c-ee01-48ea-b78b-a2bea81b90a2","Type":"ContainerStarted","Data":"2e53c8695ea3efda2c26ae056ef8a355b94fc82df9cb941815930162fec0b6de"}
Feb 16 15:16:33 crc kubenswrapper[4705]: I0216 15:16:33.137209 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094"}
Feb 16 15:16:33 crc kubenswrapper[4705]: I0216 15:16:33.816704 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.8166777530000005 podStartE2EDuration="4.816677753s" podCreationTimestamp="2026-02-16 15:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:33.173814933 +0000 UTC m=+1387.358792009" watchObservedRunningTime="2026-02-16 15:16:33.816677753 +0000 UTC m=+1388.001654829"
Feb 16 15:16:33 crc kubenswrapper[4705]: I0216 15:16:33.827978 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:34 crc kubenswrapper[4705]: I0216 15:16:34.151835 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520"}
Feb 16 15:16:34 crc kubenswrapper[4705]: I0216 15:16:34.151905 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 15:16:34 crc kubenswrapper[4705]: I0216 15:16:34.187361 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.216078701 podStartE2EDuration="6.187338655s" podCreationTimestamp="2026-02-16 15:16:28 +0000 UTC" firstStartedPulling="2026-02-16 15:16:29.347392089 +0000 UTC m=+1383.532369165" lastFinishedPulling="2026-02-16 15:16:33.318652043 +0000 UTC m=+1387.503629119" observedRunningTime="2026-02-16 15:16:34.174771641 +0000 UTC m=+1388.359748757" watchObservedRunningTime="2026-02-16 15:16:34.187338655 +0000 UTC m=+1388.372315721"
Feb 16 15:16:34 crc kubenswrapper[4705]: E0216 15:16:34.233178 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 16 15:16:34 crc kubenswrapper[4705]: E0216 15:16:34.234552 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 16 15:16:34 crc kubenswrapper[4705]: E0216 15:16:34.236559 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 16 15:16:34 crc kubenswrapper[4705]: E0216 15:16:34.236614 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor"
Feb 16 15:16:35 crc kubenswrapper[4705]: I0216 15:16:35.159774 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-central-agent" containerID="cri-o://c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" gracePeriod=30
Feb 16 15:16:35 crc kubenswrapper[4705]: I0216 15:16:35.159861 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="sg-core" containerID="cri-o://f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" gracePeriod=30
Feb 16 15:16:35 crc kubenswrapper[4705]: I0216 15:16:35.159861 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="proxy-httpd" containerID="cri-o://b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" gracePeriod=30
Feb 16 15:16:35 crc kubenswrapper[4705]: I0216 15:16:35.159861 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-notification-agent" containerID="cri-o://ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" gracePeriod=30
Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174355 4705 generic.go:334] "Generic (PLEG): container finished" podID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerID="b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" exitCode=0
Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174910 4705 generic.go:334] "Generic (PLEG): container finished" podID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerID="f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" exitCode=2
Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174926 4705 generic.go:334] "Generic (PLEG): container finished" podID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerID="ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" exitCode=0
Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174422 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520"}
Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174988 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094"}
Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.175011 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466"}
Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.943492 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-sz982"]
Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.946266 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-sz982"
Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.952154 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.952220 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.980000 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-sz982"]
Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.076510 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.082626 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.105904 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982"
Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.106019 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982"
Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.214158 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"]
Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.221515 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982"
Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.221610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982"
Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.225197 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982"
Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.234761 4705 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: E0216 15:16:39.234904 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.244593 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.244633 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.245734 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"] Feb 16 15:16:39 crc kubenswrapper[4705]: E0216 15:16:39.246612 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.247641 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 16 15:16:39 crc kubenswrapper[4705]: E0216 15:16:39.251426 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:39 crc kubenswrapper[4705]: E0216 
15:16:39.251473 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.277445 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.281530 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-sz982" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.324735 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.325226 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.431745 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.432121 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.433290 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.465669 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.580423 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.914203 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-sz982"] Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.055301 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.055436 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.125047 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.158867 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.261211 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a49bd2f-26b0-4969-86db-cd980251a202" containerID="6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60" exitCode=137 Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.262549 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-656d9cf494-c6m8t" event={"ID":"3a49bd2f-26b0-4969-86db-cd980251a202","Type":"ContainerDied","Data":"6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60"} Feb 16 15:16:40 crc kubenswrapper[4705]: W0216 15:16:40.268899 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod885bde30_8f11_4a3f_b1ed_db26e4aa4ab2.slice/crio-27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d WatchSource:0}: Error finding container 27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d: Status 404 returned error can't find the 
container with id 27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.276396 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz982" event={"ID":"481dd88a-36b9-432c-9d21-9221f5e98e6e","Type":"ContainerStarted","Data":"c4e41dff555ca49ad18fee2a483f8d8d621a7c447a6cc4eeeab8d6ada480a2b5"} Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.276591 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.277067 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz982" event={"ID":"481dd88a-36b9-432c-9d21-9221f5e98e6e","Type":"ContainerStarted","Data":"9cfce51868e78850a9a6331e47086bb0f35c3889bc3ab1b4af9675f440589e77"} Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.277959 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.299179 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"] Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.320498 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-sz982" podStartSLOduration=2.32046625 podStartE2EDuration="2.32046625s" podCreationTimestamp="2026-02-16 15:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:40.297606836 +0000 UTC m=+1394.482583912" watchObservedRunningTime="2026-02-16 15:16:40.32046625 +0000 UTC m=+1394.505443326" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.560428 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.674290 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") pod \"3a49bd2f-26b0-4969-86db-cd980251a202\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.674411 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") pod \"3a49bd2f-26b0-4969-86db-cd980251a202\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.674514 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") pod \"3a49bd2f-26b0-4969-86db-cd980251a202\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.674922 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") pod \"3a49bd2f-26b0-4969-86db-cd980251a202\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.692758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a49bd2f-26b0-4969-86db-cd980251a202" (UID: "3a49bd2f-26b0-4969-86db-cd980251a202"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.692837 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8" (OuterVolumeSpecName: "kube-api-access-m8dr8") pod "3a49bd2f-26b0-4969-86db-cd980251a202" (UID: "3a49bd2f-26b0-4969-86db-cd980251a202"). InnerVolumeSpecName "kube-api-access-m8dr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.760003 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a49bd2f-26b0-4969-86db-cd980251a202" (UID: "3a49bd2f-26b0-4969-86db-cd980251a202"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.778400 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.778441 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.778452 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.784543 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data" (OuterVolumeSpecName: "config-data") pod "3a49bd2f-26b0-4969-86db-cd980251a202" (UID: "3a49bd2f-26b0-4969-86db-cd980251a202"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.881070 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.224244 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.290699 4705 generic.go:334] "Generic (PLEG): container finished" podID="481dd88a-36b9-432c-9d21-9221f5e98e6e" containerID="c4e41dff555ca49ad18fee2a483f8d8d621a7c447a6cc4eeeab8d6ada480a2b5" exitCode=0 Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.290775 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz982" event={"ID":"481dd88a-36b9-432c-9d21-9221f5e98e6e","Type":"ContainerDied","Data":"c4e41dff555ca49ad18fee2a483f8d8d621a7c447a6cc4eeeab8d6ada480a2b5"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293036 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293114 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc 
kubenswrapper[4705]: I0216 15:16:41.293486 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293519 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293558 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293665 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293878 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.298164 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd" (OuterVolumeSpecName: "run-httpd") 
pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.299736 4705 generic.go:334] "Generic (PLEG): container finished" podID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" containerID="550b8aa10a670058b9e6ac10f7f37313d7d31e0cbd688f1364fdc7c57db609af" exitCode=0 Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.300127 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1473-account-create-update-mpxtv" event={"ID":"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2","Type":"ContainerDied","Data":"550b8aa10a670058b9e6ac10f7f37313d7d31e0cbd688f1364fdc7c57db609af"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.300170 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1473-account-create-update-mpxtv" event={"ID":"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2","Type":"ContainerStarted","Data":"27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.300224 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.306821 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc" (OuterVolumeSpecName: "kube-api-access-z7lqc") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "kube-api-access-z7lqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.306909 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts" (OuterVolumeSpecName: "scripts") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.316476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-656d9cf494-c6m8t" event={"ID":"3a49bd2f-26b0-4969-86db-cd980251a202","Type":"ContainerDied","Data":"75b8ea33afa2dc74710b8197cd60788f65dd6c58802ff69550dde775ef900e97"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.316542 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.316552 4705 scope.go:117] "RemoveContainer" containerID="6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.343430 4705 generic.go:334] "Generic (PLEG): container finished" podID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerID="c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" exitCode=0 Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.344082 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.344675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.344766 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"1d1c90bbe89df2444f211fbae43512bd74e7492f1d6052bd985eae43052c1133"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.372716 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.399196 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.402910 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.403151 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.403229 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.403305 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.481023 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.506036 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.549755 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data" (OuterVolumeSpecName: "config-data") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.608988 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.629023 4705 scope.go:117] "RemoveContainer" containerID="b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.640246 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.660650 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.661417 4705 scope.go:117] "RemoveContainer" containerID="f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.714915 4705 scope.go:117] "RemoveContainer" containerID="ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.750162 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.771915 4705 scope.go:117] "RemoveContainer" containerID="c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.782411 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.798488 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799507 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" containerName="heat-api" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799578 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" containerName="heat-api" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799616 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="sg-core" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799648 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="sg-core" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799662 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-notification-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799669 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-notification-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799691 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-central-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799699 4705 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-central-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799735 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="proxy-httpd" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799743 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="proxy-httpd" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800129 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-central-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800157 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-notification-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800174 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="sg-core" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800195 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" containerName="heat-api" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800210 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="proxy-httpd" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.802901 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.807824 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.808289 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.817378 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.826223 4705 scope.go:117] "RemoveContainer" containerID="b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.827640 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520\": container with ID starting with b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520 not found: ID does not exist" containerID="b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.827732 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520"} err="failed to get container status \"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520\": rpc error: code = NotFound desc = could not find container \"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520\": container with ID starting with b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520 not found: ID does not exist" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.827815 4705 scope.go:117] "RemoveContainer" containerID="f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 
15:16:41.828329 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094\": container with ID starting with f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094 not found: ID does not exist" containerID="f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.828424 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094"} err="failed to get container status \"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094\": rpc error: code = NotFound desc = could not find container \"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094\": container with ID starting with f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094 not found: ID does not exist" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.828501 4705 scope.go:117] "RemoveContainer" containerID="ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.828983 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466\": container with ID starting with ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466 not found: ID does not exist" containerID="ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.829062 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466"} err="failed to get container status \"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466\": rpc 
error: code = NotFound desc = could not find container \"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466\": container with ID starting with ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466 not found: ID does not exist" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.829123 4705 scope.go:117] "RemoveContainer" containerID="c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.829397 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14\": container with ID starting with c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14 not found: ID does not exist" containerID="c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.829482 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14"} err="failed to get container status \"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14\": rpc error: code = NotFound desc = could not find container \"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14\": container with ID starting with c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14 not found: ID does not exist" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.939999 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940078 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940197 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940235 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940308 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940338 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940389 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6k7\" (UniqueName: 
\"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.042702 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043019 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043222 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043492 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw6k7\" (UniqueName: \"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: 
I0216 15:16:42.043696 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043891 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.044047 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.044293 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.051889 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.053346 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.054146 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.059116 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.070658 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw6k7\" (UniqueName: \"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.138797 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.378756 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.379082 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.470635 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" path="/var/lib/kubelet/pods/3a49bd2f-26b0-4969-86db-cd980251a202/volumes" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.471580 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" path="/var/lib/kubelet/pods/ad341212-f2ac-4c6d-81cd-1113a9a524b2/volumes" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.141560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.212505 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.218180 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.218328 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.220949 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.345608 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-sz982" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.358015 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.419197 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"1f750dcbb262ca99ffa11d9f66cd78a9dd17c3af6bc8414778962cc8b0d43a40"} Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.433119 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.433111 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1473-account-create-update-mpxtv" event={"ID":"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2","Type":"ContainerDied","Data":"27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d"} Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.433528 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.435797 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.436980 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-sz982" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.437158 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz982" event={"ID":"481dd88a-36b9-432c-9d21-9221f5e98e6e","Type":"ContainerDied","Data":"9cfce51868e78850a9a6331e47086bb0f35c3889bc3ab1b4af9675f440589e77"} Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.437181 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cfce51868e78850a9a6331e47086bb0f35c3889bc3ab1b4af9675f440589e77" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.514655 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") pod \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.514720 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") pod \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.514797 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") pod \"481dd88a-36b9-432c-9d21-9221f5e98e6e\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.514978 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") pod \"481dd88a-36b9-432c-9d21-9221f5e98e6e\" (UID: 
\"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.516602 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "481dd88a-36b9-432c-9d21-9221f5e98e6e" (UID: "481dd88a-36b9-432c-9d21-9221f5e98e6e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.516732 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" (UID: "885bde30-8f11-4a3f-b1ed-db26e4aa4ab2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.519134 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.519166 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.522642 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6" (OuterVolumeSpecName: "kube-api-access-vfzk6") pod "481dd88a-36b9-432c-9d21-9221f5e98e6e" (UID: "481dd88a-36b9-432c-9d21-9221f5e98e6e"). InnerVolumeSpecName "kube-api-access-vfzk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.532895 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn" (OuterVolumeSpecName: "kube-api-access-6d9jn") pod "885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" (UID: "885bde30-8f11-4a3f-b1ed-db26e4aa4ab2"). InnerVolumeSpecName "kube-api-access-6d9jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.622286 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.622336 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.873785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:44 crc kubenswrapper[4705]: E0216 15:16:44.233476 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:44 crc kubenswrapper[4705]: E0216 15:16:44.236135 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:44 crc kubenswrapper[4705]: E0216 15:16:44.237878 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:44 crc kubenswrapper[4705]: E0216 15:16:44.237939 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:44 crc kubenswrapper[4705]: I0216 15:16:44.461924 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47"} Feb 16 15:16:45 crc kubenswrapper[4705]: I0216 15:16:45.474113 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553"} Feb 16 15:16:46 crc kubenswrapper[4705]: I0216 15:16:46.499326 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30"} Feb 16 15:16:47 crc kubenswrapper[4705]: I0216 15:16:47.521038 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1"} Feb 16 15:16:47 crc kubenswrapper[4705]: I0216 15:16:47.522119 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:16:47 crc kubenswrapper[4705]: I0216 15:16:47.553388 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.146810362 podStartE2EDuration="6.553350587s" podCreationTimestamp="2026-02-16 15:16:41 +0000 UTC" firstStartedPulling="2026-02-16 15:16:43.241155529 +0000 UTC m=+1397.426132595" lastFinishedPulling="2026-02-16 15:16:46.647695754 +0000 UTC m=+1400.832672820" observedRunningTime="2026-02-16 15:16:47.544500248 +0000 UTC m=+1401.729477334" watchObservedRunningTime="2026-02-16 15:16:47.553350587 +0000 UTC m=+1401.738327673" Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.232992 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.235458 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.237899 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.237948 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.556109 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-6brrx"] Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.556800 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481dd88a-36b9-432c-9d21-9221f5e98e6e" containerName="mariadb-database-create" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.556824 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="481dd88a-36b9-432c-9d21-9221f5e98e6e" containerName="mariadb-database-create" Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.556869 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" containerName="mariadb-account-create-update" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.556881 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" containerName="mariadb-account-create-update" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.557429 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" containerName="mariadb-account-create-update" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.557494 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="481dd88a-36b9-432c-9d21-9221f5e98e6e" containerName="mariadb-database-create" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 
15:16:49.558609 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.564266 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.564785 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.565494 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-l4hnj" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.565752 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.576633 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6brrx"] Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.601736 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.602160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.602732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") 
pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.603289 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.706872 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.707066 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.707168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.707221 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.715138 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.717024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.724404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.728219 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.894798 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:50 crc kubenswrapper[4705]: I0216 15:16:50.440504 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6brrx"]
Feb 16 15:16:50 crc kubenswrapper[4705]: W0216 15:16:50.448051 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf60aeda_83a7_4d56_95a6_c390c2d08b8a.slice/crio-0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511 WatchSource:0}: Error finding container 0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511: Status 404 returned error can't find the container with id 0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511
Feb 16 15:16:50 crc kubenswrapper[4705]: I0216 15:16:50.557523 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6brrx" event={"ID":"bf60aeda-83a7-4d56-95a6-c390c2d08b8a","Type":"ContainerStarted","Data":"0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511"}
Feb 16 15:16:53 crc kubenswrapper[4705]: I0216 15:16:53.635678 4705 generic.go:334] "Generic (PLEG): container finished" podID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" exitCode=137
Feb 16 15:16:53 crc kubenswrapper[4705]: I0216 15:16:53.636027 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e47e02d-1f4b-44d5-b6c7-d12353efb4db","Type":"ContainerDied","Data":"d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299"}
Feb 16 15:16:54 crc kubenswrapper[4705]: E0216 15:16:54.231884 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299 is running failed: container process not found" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 16 15:16:54 crc kubenswrapper[4705]: E0216 15:16:54.232268 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299 is running failed: container process not found" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 16 15:16:54 crc kubenswrapper[4705]: E0216 15:16:54.232984 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299 is running failed: container process not found" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 16 15:16:54 crc kubenswrapper[4705]: E0216 15:16:54.233186 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.255142 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.287069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") pod \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") "
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.287758 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") pod \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") "
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.287882 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") pod \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") "
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.303680 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw" (OuterVolumeSpecName: "kube-api-access-729zw") pod "3e47e02d-1f4b-44d5-b6c7-d12353efb4db" (UID: "3e47e02d-1f4b-44d5-b6c7-d12353efb4db"). InnerVolumeSpecName "kube-api-access-729zw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.325682 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data" (OuterVolumeSpecName: "config-data") pod "3e47e02d-1f4b-44d5-b6c7-d12353efb4db" (UID: "3e47e02d-1f4b-44d5-b6c7-d12353efb4db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.326231 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e47e02d-1f4b-44d5-b6c7-d12353efb4db" (UID: "3e47e02d-1f4b-44d5-b6c7-d12353efb4db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.391875 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.391925 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.391938 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.660644 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.660643 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e47e02d-1f4b-44d5-b6c7-d12353efb4db","Type":"ContainerDied","Data":"89773dd4ddcc151bb2dd44670cb30683011d6f4c21b57a0cc856f9fb0cb8aa40"}
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.660731 4705 scope.go:117] "RemoveContainer" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.662890 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6brrx" event={"ID":"bf60aeda-83a7-4d56-95a6-c390c2d08b8a","Type":"ContainerStarted","Data":"156bb556fedfb04698cb018e9e76e595a938f3b84761da0b56951eb757c0d725"}
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.688852 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-6brrx" podStartSLOduration=2.381264803 podStartE2EDuration="6.688835601s" podCreationTimestamp="2026-02-16 15:16:49 +0000 UTC" firstStartedPulling="2026-02-16 15:16:50.450522293 +0000 UTC m=+1404.635499369" lastFinishedPulling="2026-02-16 15:16:54.758093081 +0000 UTC m=+1408.943070167" observedRunningTime="2026-02-16 15:16:55.683856251 +0000 UTC m=+1409.868833327" watchObservedRunningTime="2026-02-16 15:16:55.688835601 +0000 UTC m=+1409.873812677"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.772999 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.790197 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.823826 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 15:16:55 crc kubenswrapper[4705]: E0216 15:16:55.824505 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.824529 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.824839 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.826144 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.829182 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mq9hp"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.829479 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.852776 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.908833 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.908923 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v6q9\" (UniqueName: \"kubernetes.io/projected/4d5bb097-aa56-4b02-942e-70b894afe84a-kube-api-access-8v6q9\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.909199 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.011100 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.011186 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.011234 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v6q9\" (UniqueName: \"kubernetes.io/projected/4d5bb097-aa56-4b02-942e-70b894afe84a-kube-api-access-8v6q9\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.015883 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.026168 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.040129 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v6q9\" (UniqueName: \"kubernetes.io/projected/4d5bb097-aa56-4b02-942e-70b894afe84a-kube-api-access-8v6q9\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.149364 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.444099 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" path="/var/lib/kubelet/pods/3e47e02d-1f4b-44d5-b6c7-d12353efb4db/volumes"
Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.710042 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 15:16:56 crc kubenswrapper[4705]: W0216 15:16:56.718586 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d5bb097_aa56_4b02_942e_70b894afe84a.slice/crio-fa9641fcf07810e49dbf27210b544a2b90ea71df4af044f74754e15b9bead666 WatchSource:0}: Error finding container fa9641fcf07810e49dbf27210b544a2b90ea71df4af044f74754e15b9bead666: Status 404 returned error can't find the container with id fa9641fcf07810e49dbf27210b544a2b90ea71df4af044f74754e15b9bead666
Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.703191 4705 generic.go:334] "Generic (PLEG): container finished" podID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" containerID="156bb556fedfb04698cb018e9e76e595a938f3b84761da0b56951eb757c0d725" exitCode=0
Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.703304 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6brrx" event={"ID":"bf60aeda-83a7-4d56-95a6-c390c2d08b8a","Type":"ContainerDied","Data":"156bb556fedfb04698cb018e9e76e595a938f3b84761da0b56951eb757c0d725"}
Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.706261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4d5bb097-aa56-4b02-942e-70b894afe84a","Type":"ContainerStarted","Data":"81e8f4e116902e62b97158e552714c3661e953fa5a6ad6d50ae6d9172f24e2f0"}
Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.707078 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4d5bb097-aa56-4b02-942e-70b894afe84a","Type":"ContainerStarted","Data":"fa9641fcf07810e49dbf27210b544a2b90ea71df4af044f74754e15b9bead666"}
Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.707181 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.780069 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.780030022 podStartE2EDuration="2.780030022s" podCreationTimestamp="2026-02-16 15:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:57.756061026 +0000 UTC m=+1411.941038122" watchObservedRunningTime="2026-02-16 15:16:57.780030022 +0000 UTC m=+1411.965007138"
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.187479 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.312329 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") pod \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") "
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.312982 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") pod \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") "
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.313167 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") pod \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") "
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.313435 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") pod \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") "
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.320151 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np" (OuterVolumeSpecName: "kube-api-access-jg2np") pod "bf60aeda-83a7-4d56-95a6-c390c2d08b8a" (UID: "bf60aeda-83a7-4d56-95a6-c390c2d08b8a"). InnerVolumeSpecName "kube-api-access-jg2np". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.326616 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts" (OuterVolumeSpecName: "scripts") pod "bf60aeda-83a7-4d56-95a6-c390c2d08b8a" (UID: "bf60aeda-83a7-4d56-95a6-c390c2d08b8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.347910 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf60aeda-83a7-4d56-95a6-c390c2d08b8a" (UID: "bf60aeda-83a7-4d56-95a6-c390c2d08b8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.352541 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data" (OuterVolumeSpecName: "config-data") pod "bf60aeda-83a7-4d56-95a6-c390c2d08b8a" (UID: "bf60aeda-83a7-4d56-95a6-c390c2d08b8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.416346 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.416397 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.416409 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.416421 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.732470 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6brrx" event={"ID":"bf60aeda-83a7-4d56-95a6-c390c2d08b8a","Type":"ContainerDied","Data":"0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511"}
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.732513 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511"
Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.732538 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6brrx"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.184872 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.672027 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"]
Feb 16 15:17:01 crc kubenswrapper[4705]: E0216 15:17:01.672926 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" containerName="aodh-db-sync"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.672958 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" containerName="aodh-db-sync"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.673256 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" containerName="aodh-db-sync"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.674537 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.676873 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.680458 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.692600 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"]
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.801757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.801800 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.801825 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.801928 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx8mw\" (UniqueName: \"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.837982 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.839903 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.844778 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.892524 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.905296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.905349 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.905409 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.905585 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx8mw\" (UniqueName: \"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.914863 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.915143 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.915697 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.959686 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx8mw\" (UniqueName: \"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.965399 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.968620 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.979273 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-l4hnj"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:01.999356 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.001009 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.001851 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.021095 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037185 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037413 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037445 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037472 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgfgl\" (UniqueName: \"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037527 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.085722 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.087949 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.100481 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.151220 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.152838 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.152995 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.153042 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.153083 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.153108 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgfgl\" (UniqueName: \"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.153174 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.189520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") "
pod="openstack/aodh-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.190926 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.190532 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.190023 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.191278 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.191708 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.214831 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgfgl\" (UniqueName: 
\"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.223291 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.257360 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.257714 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.257769 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbn44\" (UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.257833 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.274890 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:02 crc kubenswrapper[4705]: 
I0216 15:17:02.276817 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.282704 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.324430 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.355963 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.358755 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.361556 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.361711 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.361730 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.361771 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbn44\" 
(UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.362141 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.363863 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.374114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.379841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.416207 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.449345 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbn44\" (UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.470804 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.489851 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.520401 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.521016 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.521149 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.523217 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.523322 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.523494 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.523563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.539304 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.544080 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.546607 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626145 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626642 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626697 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626799 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626852 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.627722 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.632836 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.636669 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.642920 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.663689 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.665710 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.676852 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.682232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.729871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.729989 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.730047 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.730104 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.730180 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nxgb\" (UniqueName: \"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.730210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.740036 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.769902 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.839723 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.839837 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.839897 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.839959 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.840040 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxgb\" (UniqueName: 
\"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.840073 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.841073 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.842286 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.842649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.842921 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: 
\"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.843266 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.877120 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nxgb\" (UniqueName: \"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.934581 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.957048 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"] Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.302076 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:17:03 crc kubenswrapper[4705]: W0216 15:17:03.549569 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3bb879_c0d5_4b09_a454_034daa93ab77.slice/crio-107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280 WatchSource:0}: Error finding container 107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280: Status 404 returned error can't find the container with id 107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280 Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.598182 4705 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.617613 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.880785 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac","Type":"ContainerStarted","Data":"fc7c9ea585cc1fde92feb6b64f7c9647742d877ff5656a5cd26ed4a40b9bc589"} Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.909513 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"] Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.934360 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerStarted","Data":"1536a95ab5596e441f283dcccf66e85b779a0237afc5c6e0d01652df6f0e34b4"} Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.934547 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.936668 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280"} Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.958819 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"] Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.963699 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.966198 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v8zp2" event={"ID":"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993","Type":"ContainerStarted","Data":"014788fc35c94841b6f951360c014870b95d49ee1ef3f79b1ab6afab99936dbb"} Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.966249 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v8zp2" event={"ID":"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993","Type":"ContainerStarted","Data":"21670372d25daf481fb0e0c8cb90e3d0d283f8f3d303d189ab66dd063244da1d"} Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.979752 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.078788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 
15:17:04.078918 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.079293 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.079417 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.190585 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.191075 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" 
Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.191134 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.191159 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.206119 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.206395 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.248675 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.251686 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.255965 4705 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.275053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.348976 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.357671 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-v8zp2" podStartSLOduration=3.357647729 podStartE2EDuration="3.357647729s" podCreationTimestamp="2026-02-16 15:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:04.095956607 +0000 UTC m=+1418.280933693" watchObservedRunningTime="2026-02-16 15:17:04.357647729 +0000 UTC m=+1418.542624805" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.564174 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.993909 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c","Type":"ContainerStarted","Data":"2c7d553310530035d6f4243d4ec8d424a9dbcb3e3927033f1971bef339bd967f"} Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.000843 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerStarted","Data":"dee0ea11222770d7565040c2a8d452d725637a688407fbd260ff2426c890c0e6"} Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.020596 4705 generic.go:334] "Generic (PLEG): container finished" podID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerID="ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31" exitCode=0 Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.022525 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerDied","Data":"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31"} Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.022608 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerStarted","Data":"ede06e3254a42f9f6eec0ac56c7e1b7e4b102971ccf37608944546f6accc4101"} Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.280198 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"] Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.053475 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c29kz" 
event={"ID":"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8","Type":"ContainerStarted","Data":"5ae2ce7f764bba95fefdc2957453d34ae6c76d5367261ab8d7e532efc53c1306"} Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.054014 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c29kz" event={"ID":"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8","Type":"ContainerStarted","Data":"f1cba0996283d3a30785b20c2b5138e18d1243d50932f93f9ed341cdfd481c88"} Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.056761 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerStarted","Data":"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37"} Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.057035 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.086737 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-c29kz" podStartSLOduration=3.086717499 podStartE2EDuration="3.086717499s" podCreationTimestamp="2026-02-16 15:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:06.081963115 +0000 UTC m=+1420.266940191" watchObservedRunningTime="2026-02-16 15:17:06.086717499 +0000 UTC m=+1420.271694575" Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.135985 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.153592 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.167264 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" podStartSLOduration=4.1672335369999995 podStartE2EDuration="4.167233537s" podCreationTimestamp="2026-02-16 15:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:06.104469009 +0000 UTC m=+1420.289446085" watchObservedRunningTime="2026-02-16 15:17:06.167233537 +0000 UTC m=+1420.352210613" Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.889237 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.890205 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-central-agent" containerID="cri-o://48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47" gracePeriod=30 Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.891330 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" containerID="cri-o://637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1" gracePeriod=30 Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.891396 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="sg-core" containerID="cri-o://56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30" gracePeriod=30 Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.891436 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-notification-agent" containerID="cri-o://dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553" gracePeriod=30 Feb 16 
15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.904668 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.237:3000/\": EOF" Feb 16 15:17:09 crc kubenswrapper[4705]: I0216 15:17:09.121095 4705 generic.go:334] "Generic (PLEG): container finished" podID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerID="56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30" exitCode=2 Feb 16 15:17:09 crc kubenswrapper[4705]: I0216 15:17:09.121181 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.135796 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c","Type":"ContainerStarted","Data":"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.136230 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c" gracePeriod=30 Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.137855 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.140909 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerStarted","Data":"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.140942 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerStarted","Data":"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.141108 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-log" containerID="cri-o://4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035" gracePeriod=30 Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.141145 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-metadata" containerID="cri-o://a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51" gracePeriod=30 Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.144963 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac","Type":"ContainerStarted","Data":"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.149563 4705 generic.go:334] "Generic (PLEG): container finished" podID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerID="637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1" exitCode=0 Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.149601 4705 generic.go:334] "Generic (PLEG): container finished" podID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerID="48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47" exitCode=0 Feb 16 15:17:10 crc 
kubenswrapper[4705]: I0216 15:17:10.149647 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.149680 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.151828 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerStarted","Data":"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.151890 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerStarted","Data":"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.164194 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.985916268 podStartE2EDuration="8.164175785s" podCreationTimestamp="2026-02-16 15:17:02 +0000 UTC" firstStartedPulling="2026-02-16 15:17:04.148035494 +0000 UTC m=+1418.333012570" lastFinishedPulling="2026-02-16 15:17:09.326295011 +0000 UTC m=+1423.511272087" observedRunningTime="2026-02-16 15:17:10.163053433 +0000 UTC m=+1424.348030509" watchObservedRunningTime="2026-02-16 15:17:10.164175785 +0000 UTC m=+1424.349152861" Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.200384 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.440189565 
podStartE2EDuration="9.200348744s" podCreationTimestamp="2026-02-16 15:17:01 +0000 UTC" firstStartedPulling="2026-02-16 15:17:03.57175859 +0000 UTC m=+1417.756735666" lastFinishedPulling="2026-02-16 15:17:09.331917769 +0000 UTC m=+1423.516894845" observedRunningTime="2026-02-16 15:17:10.197323659 +0000 UTC m=+1424.382300735" watchObservedRunningTime="2026-02-16 15:17:10.200348744 +0000 UTC m=+1424.385325820" Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.274560 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.261138752 podStartE2EDuration="9.272943629s" podCreationTimestamp="2026-02-16 15:17:01 +0000 UTC" firstStartedPulling="2026-02-16 15:17:03.309881774 +0000 UTC m=+1417.494858850" lastFinishedPulling="2026-02-16 15:17:09.321686651 +0000 UTC m=+1423.506663727" observedRunningTime="2026-02-16 15:17:10.218764253 +0000 UTC m=+1424.403741329" watchObservedRunningTime="2026-02-16 15:17:10.272943629 +0000 UTC m=+1424.457920705" Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.290829 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.046143955 podStartE2EDuration="8.290806092s" podCreationTimestamp="2026-02-16 15:17:02 +0000 UTC" firstStartedPulling="2026-02-16 15:17:04.076582301 +0000 UTC m=+1418.261559387" lastFinishedPulling="2026-02-16 15:17:09.321244448 +0000 UTC m=+1423.506221524" observedRunningTime="2026-02-16 15:17:10.247179933 +0000 UTC m=+1424.432157019" watchObservedRunningTime="2026-02-16 15:17:10.290806092 +0000 UTC m=+1424.475783168" Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.689021 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 16 15:17:11 crc kubenswrapper[4705]: I0216 15:17:11.189965 4705 generic.go:334] "Generic (PLEG): container finished" podID="c403fb44-6250-449b-b257-953b925c635a" 
containerID="4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035" exitCode=143 Feb 16 15:17:11 crc kubenswrapper[4705]: I0216 15:17:11.191316 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerDied","Data":"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035"} Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.205676 4705 generic.go:334] "Generic (PLEG): container finished" podID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerID="dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553" exitCode=0 Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.205752 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553"} Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.206673 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"1f750dcbb262ca99ffa11d9f66cd78a9dd17c3af6bc8414778962cc8b0d43a40"} Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.206686 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f750dcbb262ca99ffa11d9f66cd78a9dd17c3af6bc8414778962cc8b0d43a40" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.209435 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f"} Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.230866 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292337 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292527 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw6k7\" (UniqueName: \"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292566 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292614 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292660 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292765 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292797 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.294102 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.299516 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.304078 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts" (OuterVolumeSpecName: "scripts") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.304253 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7" (OuterVolumeSpecName: "kube-api-access-dw6k7") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "kube-api-access-dw6k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.355888 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395823 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw6k7\" (UniqueName: \"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395869 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395879 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395887 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395896 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.412435 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.469678 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data" (OuterVolumeSpecName: "config-data") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.471914 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.471964 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.484651 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.484708 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.501463 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.501498 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.545037 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.741379 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.770300 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.770353 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.936540 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.024457 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"] Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.024698 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="dnsmasq-dns" containerID="cri-o://2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a" gracePeriod=10 Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.250423 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerID="2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a" exitCode=0 Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.250932 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerDied","Data":"2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a"} Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.251015 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.341352 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.405260 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.452608 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467006 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:13 crc kubenswrapper[4705]: E0216 15:17:13.467758 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-notification-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467777 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-notification-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: E0216 15:17:13.467791 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-central-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467797 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-central-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: E0216 15:17:13.467814 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467820 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" Feb 16 15:17:13 crc kubenswrapper[4705]: E0216 15:17:13.467835 4705 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="sg-core" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467842 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="sg-core" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.468121 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-central-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.468158 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-notification-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.468173 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="sg-core" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.468184 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.470751 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.473631 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.478428 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.481870 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551473 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551537 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551620 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551701 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") pod \"ceilometer-0\" (UID: 
\"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551720 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551794 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551831 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.555660 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.555697 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.653991 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654135 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654190 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654245 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654319 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " 
pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654401 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.655035 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.661283 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.661756 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.665057 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") pod 
\"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.677260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.678139 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.805120 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.464045 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" path="/var/lib/kubelet/pods/881aa943-ed5c-4d96-aa9e-3942b76d8e1a/volumes" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.481851 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601147 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601207 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601298 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkr56\" (UniqueName: \"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601403 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601429 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601453 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.604358 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.620644 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56" (OuterVolumeSpecName: "kube-api-access-lkr56") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "kube-api-access-lkr56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.690502 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.705555 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.705589 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkr56\" (UniqueName: \"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.705842 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.736908 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config" (OuterVolumeSpecName: "config") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.750923 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.766195 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.808966 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.809016 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.809033 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.809048 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.361926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c"} Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.369917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" 
event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerDied","Data":"6acd1944658746507adf3b4af992bae06e651f8bf8b1f5ec60b84795bec2d1f1"} Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.370021 4705 scope.go:117] "RemoveContainer" containerID="2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a" Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.370232 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.374452 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"4259daba5069f1ad1d3855f14ca5d403733a2ff26df6d21b1e554e1a1f3397e0"} Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.420515 4705 scope.go:117] "RemoveContainer" containerID="af7fbc84522ccf5649bb0a370c37dac7dd268bfbb7ce51833545d0053cd05d20" Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.446584 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"] Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.459352 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"] Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.309137 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.410296 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f"} Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.413778 4705 generic.go:334] "Generic (PLEG): container finished" podID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" 
containerID="5ae2ce7f764bba95fefdc2957453d34ae6c76d5367261ab8d7e532efc53c1306" exitCode=0 Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.413849 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c29kz" event={"ID":"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8","Type":"ContainerDied","Data":"5ae2ce7f764bba95fefdc2957453d34ae6c76d5367261ab8d7e532efc53c1306"} Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.437315 4705 generic.go:334] "Generic (PLEG): container finished" podID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" containerID="014788fc35c94841b6f951360c014870b95d49ee1ef3f79b1ab6afab99936dbb" exitCode=0 Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.451636 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" path="/var/lib/kubelet/pods/6f14f59b-5faf-48e0-bbdc-7f97c3836a35/volumes" Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.452452 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v8zp2" event={"ID":"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993","Type":"ContainerDied","Data":"014788fc35c94841b6f951360c014870b95d49ee1ef3f79b1ab6afab99936dbb"} Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.464587 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a"} Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472237 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1"} Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472556 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-api" containerID="cri-o://e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f" gracePeriod=30 Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472640 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-evaluator" containerID="cri-o://b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f" gracePeriod=30 Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472622 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-listener" containerID="cri-o://1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1" gracePeriod=30 Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472669 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-notifier" containerID="cri-o://ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c" gracePeriod=30 Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.525956 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.6027582750000002 podStartE2EDuration="16.525931432s" podCreationTimestamp="2026-02-16 15:17:01 +0000 UTC" firstStartedPulling="2026-02-16 15:17:03.561642125 +0000 UTC m=+1417.746619201" lastFinishedPulling="2026-02-16 15:17:16.484815282 +0000 UTC m=+1430.669792358" observedRunningTime="2026-02-16 15:17:17.494834506 +0000 UTC m=+1431.679811602" watchObservedRunningTime="2026-02-16 15:17:17.525931432 +0000 UTC m=+1431.710908508" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.299472 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.304653 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.379783 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") pod \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380042 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx8mw\" (UniqueName: \"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") pod \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380076 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") pod \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380334 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") pod \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380832 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") pod \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\" (UID: 
\"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380892 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") pod \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380969 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") pod \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380999 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") pod \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.421016 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts" (OuterVolumeSpecName: "scripts") pod "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" (UID: "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.421058 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw" (OuterVolumeSpecName: "kube-api-access-jx8mw") pod "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" (UID: "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993"). InnerVolumeSpecName "kube-api-access-jx8mw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.423215 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5" (OuterVolumeSpecName: "kube-api-access-lfrg5") pod "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" (UID: "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8"). InnerVolumeSpecName "kube-api-access-lfrg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.443576 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts" (OuterVolumeSpecName: "scripts") pod "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" (UID: "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.509118 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx8mw\" (UniqueName: \"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.509163 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.509173 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.509182 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") on node \"crc\" DevicePath \"\"" 
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.538738 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" (UID: "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.540528 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data" (OuterVolumeSpecName: "config-data") pod "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" (UID: "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.543758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c29kz" event={"ID":"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8","Type":"ContainerDied","Data":"f1cba0996283d3a30785b20c2b5138e18d1243d50932f93f9ed341cdfd481c88"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.543805 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1cba0996283d3a30785b20c2b5138e18d1243d50932f93f9ed341cdfd481c88"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.543968 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c29kz"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.576018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data" (OuterVolumeSpecName: "config-data") pod "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" (UID: "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585698 4705 generic.go:334] "Generic (PLEG): container finished" podID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerID="ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c" exitCode=0
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585741 4705 generic.go:334] "Generic (PLEG): container finished" podID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerID="b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f" exitCode=0
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585750 4705 generic.go:334] "Generic (PLEG): container finished" podID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerID="e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f" exitCode=0
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585832 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585876 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.586024 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" (UID: "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.597572 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598131 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:18 crc kubenswrapper[4705]: E0216 15:17:18.598204 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="dnsmasq-dns"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598218 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="dnsmasq-dns"
Feb 16 15:17:18 crc kubenswrapper[4705]: E0216 15:17:18.598244 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" containerName="nova-cell1-conductor-db-sync"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598252 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" containerName="nova-cell1-conductor-db-sync"
Feb 16 15:17:18 crc kubenswrapper[4705]: E0216 15:17:18.598265 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" containerName="nova-manage"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598271 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" containerName="nova-manage"
Feb 16 15:17:18 crc kubenswrapper[4705]: E0216 15:17:18.598304 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="init"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598312 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="init"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.604363 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" containerName="nova-cell1-conductor-db-sync"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.604457 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="dnsmasq-dns"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.604492 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" containerName="nova-manage"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.606092 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v8zp2" event={"ID":"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993","Type":"ContainerDied","Data":"21670372d25daf481fb0e0c8cb90e3d0d283f8f3d303d189ab66dd063244da1d"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.606137 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21670372d25daf481fb0e0c8cb90e3d0d283f8f3d303d189ab66dd063244da1d"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.606245 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.616263 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.626059 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.626121 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.626132 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.626142 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.707361 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.729629 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.729782 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.729908 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mrm\" (UniqueName: \"kubernetes.io/projected/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-kube-api-access-w8mrm\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.762983 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.763281 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerName="nova-scheduler-scheduler" containerID="cri-o://9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea" gracePeriod=30
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.789333 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.789633 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log" containerID="cri-o://68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d" gracePeriod=30
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.789786 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api" containerID="cri-o://5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742" gracePeriod=30
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.834549 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.843665 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.845858 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mrm\" (UniqueName: \"kubernetes.io/projected/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-kube-api-access-w8mrm\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.846168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.860312 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.864191 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mrm\" (UniqueName: \"kubernetes.io/projected/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-kube-api-access-w8mrm\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.936603 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.535040 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.639180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d","Type":"ContainerStarted","Data":"b1ad318ec09dd1620386968e6fa2b491069c7d48c8f5fd9f5f0d017edb59be8d"}
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.669502 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071"}
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.669765 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-central-agent" containerID="cri-o://f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" gracePeriod=30
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.670168 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.671494 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-notification-agent" containerID="cri-o://3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" gracePeriod=30
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.671578 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="sg-core" containerID="cri-o://8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" gracePeriod=30
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.671740 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="proxy-httpd" containerID="cri-o://a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" gracePeriod=30
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.683880 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerDied","Data":"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d"}
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.683884 4705 generic.go:334] "Generic (PLEG): container finished" podID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerID="68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d" exitCode=143
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.705851 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.449539698 podStartE2EDuration="6.705826561s" podCreationTimestamp="2026-02-16 15:17:13 +0000 UTC" firstStartedPulling="2026-02-16 15:17:14.621899422 +0000 UTC m=+1428.806876498" lastFinishedPulling="2026-02-16 15:17:18.878186285 +0000 UTC m=+1433.063163361" observedRunningTime="2026-02-16 15:17:19.692824775 +0000 UTC m=+1433.877801851" watchObservedRunningTime="2026-02-16 15:17:19.705826561 +0000 UTC m=+1433.890803637"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.537287 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699496 4705 generic.go:334] "Generic (PLEG): container finished" podID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerID="a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" exitCode=0
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699536 4705 generic.go:334] "Generic (PLEG): container finished" podID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerID="8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" exitCode=2
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699545 4705 generic.go:334] "Generic (PLEG): container finished" podID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerID="3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" exitCode=0
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699626 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699668 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699680 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701656 4705 generic.go:334] "Generic (PLEG): container finished" podID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerID="9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea" exitCode=0
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701745 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac","Type":"ContainerDied","Data":"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701788 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac","Type":"ContainerDied","Data":"fc7c9ea585cc1fde92feb6b64f7c9647742d877ff5656a5cd26ed4a40b9bc589"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701815 4705 scope.go:117] "RemoveContainer" containerID="9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701956 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.705292 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d","Type":"ContainerStarted","Data":"d50eec337da913870a7bb170ef7c4121a92ee2d0dbee040bfc9c39f0b41bb21a"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.705429 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.714836 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") pod \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") "
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.714934 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") pod \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") "
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.715067 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") pod \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") "
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.735945 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg" (OuterVolumeSpecName: "kube-api-access-4vrqg") pod "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" (UID: "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac"). InnerVolumeSpecName "kube-api-access-4vrqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.740801 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.740783187 podStartE2EDuration="2.740783187s" podCreationTimestamp="2026-02-16 15:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:20.730439786 +0000 UTC m=+1434.915416862" watchObservedRunningTime="2026-02-16 15:17:20.740783187 +0000 UTC m=+1434.925760263"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.742643 4705 scope.go:117] "RemoveContainer" containerID="9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"
Feb 16 15:17:20 crc kubenswrapper[4705]: E0216 15:17:20.743225 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea\": container with ID starting with 9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea not found: ID does not exist" containerID="9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.743261 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"} err="failed to get container status \"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea\": rpc error: code = NotFound desc = could not find container \"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea\": container with ID starting with 9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea not found: ID does not exist"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.764771 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data" (OuterVolumeSpecName: "config-data") pod "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" (UID: "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.772381 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" (UID: "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.818508 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.818546 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.818561 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.100929 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.112699 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.129969 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:21 crc kubenswrapper[4705]: E0216 15:17:21.130613 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerName="nova-scheduler-scheduler"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.130632 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerName="nova-scheduler-scheduler"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.130899 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerName="nova-scheduler-scheduler"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.131839 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.134225 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.148792 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.238346 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.238448 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.238727 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.341501 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.342066 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.342185 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.355610 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.359067 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.379149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.459151 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: E0216 15:17:21.944522 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd192f950_fab8_43a1_828b_4bc1613acb4f.slice/crio-5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.059120 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:22 crc kubenswrapper[4705]: W0216 15:17:22.079948 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24dafc8c_fbe7_45cc_9558_fad23223b4d0.slice/crio-3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814 WatchSource:0}: Error finding container 3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814: Status 404 returned error can't find the container with id 3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.440503 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" path="/var/lib/kubelet/pods/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac/volumes"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.545807 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.693890 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") pod \"d192f950-fab8-43a1-828b-4bc1613acb4f\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") "
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.694485 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbn44\" (UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") pod \"d192f950-fab8-43a1-828b-4bc1613acb4f\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") "
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.694577 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs" (OuterVolumeSpecName: "logs") pod "d192f950-fab8-43a1-828b-4bc1613acb4f" (UID: "d192f950-fab8-43a1-828b-4bc1613acb4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.694658 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") pod \"d192f950-fab8-43a1-828b-4bc1613acb4f\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") "
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.694706 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") pod \"d192f950-fab8-43a1-828b-4bc1613acb4f\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") "
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.695503 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.713852 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44" (OuterVolumeSpecName: "kube-api-access-rbn44") pod "d192f950-fab8-43a1-828b-4bc1613acb4f" (UID: "d192f950-fab8-43a1-828b-4bc1613acb4f"). InnerVolumeSpecName "kube-api-access-rbn44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.730819 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data" (OuterVolumeSpecName: "config-data") pod "d192f950-fab8-43a1-828b-4bc1613acb4f" (UID: "d192f950-fab8-43a1-828b-4bc1613acb4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.744952 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d192f950-fab8-43a1-828b-4bc1613acb4f" (UID: "d192f950-fab8-43a1-828b-4bc1613acb4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.758977 4705 generic.go:334] "Generic (PLEG): container finished" podID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerID="5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742" exitCode=0
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.759058 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.759058 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerDied","Data":"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742"}
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.759158 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerDied","Data":"1536a95ab5596e441f283dcccf66e85b779a0237afc5c6e0d01652df6f0e34b4"}
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.759186 4705 scope.go:117] "RemoveContainer" containerID="5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.761107 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"24dafc8c-fbe7-45cc-9558-fad23223b4d0","Type":"ContainerStarted","Data":"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372"}
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.761155 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"24dafc8c-fbe7-45cc-9558-fad23223b4d0","Type":"ContainerStarted","Data":"3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814"}
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.798691 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.798726 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.798738 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbn44\" (UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.804141 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.8041101419999999 podStartE2EDuration="1.804110142s" podCreationTimestamp="2026-02-16 15:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:22.788550304 +0000 UTC m=+1436.973527380" watchObservedRunningTime="2026-02-16 15:17:22.804110142 +0000 UTC m=+1436.989087218"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.831449 4705 scope.go:117] "RemoveContainer" containerID="68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.845344 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.870540 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.885219 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:22 crc kubenswrapper[4705]: E0216 15:17:22.886081 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.886109 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log"
Feb 16 15:17:22 crc kubenswrapper[4705]: E0216 15:17:22.886168 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.886178 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.886512 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.886543 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.888315 4705 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.888932 4705 scope.go:117] "RemoveContainer" containerID="5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742" Feb 16 15:17:22 crc kubenswrapper[4705]: E0216 15:17:22.889594 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742\": container with ID starting with 5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742 not found: ID does not exist" containerID="5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.889635 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742"} err="failed to get container status \"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742\": rpc error: code = NotFound desc = could not find container \"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742\": container with ID starting with 5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742 not found: ID does not exist" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.889669 4705 scope.go:117] "RemoveContainer" containerID="68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d" Feb 16 15:17:22 crc kubenswrapper[4705]: E0216 15:17:22.890366 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d\": container with ID starting with 68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d not found: ID does not exist" containerID="68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.890417 
4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d"} err="failed to get container status \"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d\": rpc error: code = NotFound desc = could not find container \"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d\": container with ID starting with 68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d not found: ID does not exist" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.891483 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.901042 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.006325 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.006402 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.006548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 
15:17:23.006605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.109692 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.109766 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.109872 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.109906 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.111000 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") pod \"nova-api-0\" (UID: 
\"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.118030 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.118115 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.133686 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.215981 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.764798 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.433707 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" path="/var/lib/kubelet/pods/d192f950-fab8-43a1-828b-4bc1613acb4f/volumes" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.784283 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.805917 4705 generic.go:334] "Generic (PLEG): container finished" podID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerID="f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" exitCode=0 Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.806012 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.806047 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"4259daba5069f1ad1d3855f14ca5d403733a2ff26df6d21b1e554e1a1f3397e0"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.806067 4705 scope.go:117] "RemoveContainer" containerID="a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.806148 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.809207 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerStarted","Data":"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.809231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerStarted","Data":"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.809242 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerStarted","Data":"b9e16d20a34c818b351cfcb18e6ae185d36b1c587820242a2f7a8a4d81bd9408"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.854598 4705 scope.go:117] "RemoveContainer" containerID="8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.880994 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.88097043 podStartE2EDuration="2.88097043s" podCreationTimestamp="2026-02-16 15:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:24.865230276 +0000 UTC m=+1439.050207362" watchObservedRunningTime="2026-02-16 15:17:24.88097043 +0000 UTC m=+1439.065947516" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.883812 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: 
\"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884020 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884158 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884550 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884605 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884671 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.886479 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.887040 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.904513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts" (OuterVolumeSpecName: "scripts") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.904511 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx" (OuterVolumeSpecName: "kube-api-access-rs7dx") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "kube-api-access-rs7dx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.921218 4705 scope.go:117] "RemoveContainer" containerID="3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.957981 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.958602 4705 scope.go:117] "RemoveContainer" containerID="f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.987712 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988287 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988303 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988347 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988362 4705 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988423 4705 scope.go:117] "RemoveContainer" containerID="a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" Feb 16 15:17:24 crc kubenswrapper[4705]: E0216 15:17:24.989035 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071\": container with ID starting with a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071 not found: ID does not exist" containerID="a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.989109 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071"} err="failed to get container status \"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071\": rpc error: code = NotFound desc = could not find container \"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071\": container with ID starting with a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071 not found: ID does not exist" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.989166 4705 scope.go:117] "RemoveContainer" containerID="8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" Feb 16 15:17:24 crc kubenswrapper[4705]: E0216 15:17:24.989622 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f\": container with ID starting with 8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f not found: ID does not exist" 
containerID="8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.989658 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f"} err="failed to get container status \"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f\": rpc error: code = NotFound desc = could not find container \"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f\": container with ID starting with 8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f not found: ID does not exist" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.989678 4705 scope.go:117] "RemoveContainer" containerID="3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" Feb 16 15:17:24 crc kubenswrapper[4705]: E0216 15:17:24.990003 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a\": container with ID starting with 3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a not found: ID does not exist" containerID="3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.990033 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a"} err="failed to get container status \"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a\": rpc error: code = NotFound desc = could not find container \"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a\": container with ID starting with 3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a not found: ID does not exist" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.990071 4705 scope.go:117] 
"RemoveContainer" containerID="f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" Feb 16 15:17:24 crc kubenswrapper[4705]: E0216 15:17:24.990315 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f\": container with ID starting with f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f not found: ID does not exist" containerID="f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.990356 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f"} err="failed to get container status \"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f\": rpc error: code = NotFound desc = could not find container \"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f\": container with ID starting with f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f not found: ID does not exist" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.007206 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.070907 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data" (OuterVolumeSpecName: "config-data") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.090020 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.090056 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.190695 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.206276 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.262856 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:25 crc kubenswrapper[4705]: E0216 15:17:25.263612 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-central-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263631 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-central-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: E0216 15:17:25.263644 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="proxy-httpd" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263652 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="proxy-httpd" Feb 16 15:17:25 crc kubenswrapper[4705]: E0216 15:17:25.263684 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-notification-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263691 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-notification-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: E0216 15:17:25.263714 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="sg-core" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263720 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="sg-core" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263961 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-notification-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263982 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="sg-core" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263999 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-central-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.264005 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="proxy-httpd" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.266437 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.279577 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.279645 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.304579 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.408864 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhffq\" (UniqueName: \"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409029 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409107 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409132 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409198 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409219 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512003 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhffq\" (UniqueName: \"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512127 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512195 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512218 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512237 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512267 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512284 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.513655 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.517822 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.517963 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.518073 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.526032 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.531842 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhffq\" (UniqueName: \"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.532838 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0"
Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.606733 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.094017 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.460790 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" path="/var/lib/kubelet/pods/f1c48521-25a2-4bd8-be3f-ad6da6409486/volumes"
Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.462953 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.840550 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82"}
Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.841357 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"7af76ea020969884cac1afc67b6c684eaea556dc0d059bdf3f133791959a3f39"}
Feb 16 15:17:27 crc kubenswrapper[4705]: I0216 15:17:27.856515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2"}
Feb 16 15:17:28 crc kubenswrapper[4705]: I0216 15:17:28.880346 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a"}
Feb 16 15:17:28 crc kubenswrapper[4705]: I0216 15:17:28.997057 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:29 crc kubenswrapper[4705]: I0216 15:17:29.893199 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c"}
Feb 16 15:17:29 crc kubenswrapper[4705]: I0216 15:17:29.893873 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 15:17:29 crc kubenswrapper[4705]: I0216 15:17:29.926319 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5737448870000001 podStartE2EDuration="4.926301601s" podCreationTimestamp="2026-02-16 15:17:25 +0000 UTC" firstStartedPulling="2026-02-16 15:17:26.101689418 +0000 UTC m=+1440.286666494" lastFinishedPulling="2026-02-16 15:17:29.454246132 +0000 UTC m=+1443.639223208" observedRunningTime="2026-02-16 15:17:29.920836347 +0000 UTC m=+1444.105813443" watchObservedRunningTime="2026-02-16 15:17:29.926301601 +0000 UTC m=+1444.111278677"
Feb 16 15:17:31 crc kubenswrapper[4705]: I0216 15:17:31.460422 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 16 15:17:31 crc kubenswrapper[4705]: I0216 15:17:31.525194 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 16 15:17:31 crc kubenswrapper[4705]: I0216 15:17:31.971761 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 16 15:17:33 crc kubenswrapper[4705]: I0216 15:17:33.216679 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 15:17:33 crc kubenswrapper[4705]: I0216 15:17:33.217137 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 15:17:34 crc kubenswrapper[4705]: I0216 15:17:34.307640 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 15:17:34 crc kubenswrapper[4705]: I0216 15:17:34.307631 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.800411 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.812143 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.893859 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") pod \"c403fb44-6250-449b-b257-953b925c635a\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") "
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894014 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") pod \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") "
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894212 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") pod \"c403fb44-6250-449b-b257-953b925c635a\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") "
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894287 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") pod \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") "
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894405 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") pod \"c403fb44-6250-449b-b257-953b925c635a\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") "
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894515 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") pod \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") "
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894860 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") pod \"c403fb44-6250-449b-b257-953b925c635a\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") "
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.898900 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs" (OuterVolumeSpecName: "logs") pod "c403fb44-6250-449b-b257-953b925c635a" (UID: "c403fb44-6250-449b-b257-953b925c635a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.904255 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h" (OuterVolumeSpecName: "kube-api-access-gmx7h") pod "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" (UID: "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c"). InnerVolumeSpecName "kube-api-access-gmx7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.907236 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h" (OuterVolumeSpecName: "kube-api-access-6245h") pod "c403fb44-6250-449b-b257-953b925c635a" (UID: "c403fb44-6250-449b-b257-953b925c635a"). InnerVolumeSpecName "kube-api-access-6245h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.936520 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data" (OuterVolumeSpecName: "config-data") pod "c403fb44-6250-449b-b257-953b925c635a" (UID: "c403fb44-6250-449b-b257-953b925c635a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.937832 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" (UID: "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.947194 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c403fb44-6250-449b-b257-953b925c635a" (UID: "c403fb44-6250-449b-b257-953b925c635a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.954149 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data" (OuterVolumeSpecName: "config-data") pod "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" (UID: "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000103 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000160 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000183 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000203 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000224 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000244 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000265 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.114195 4705 generic.go:334] "Generic (PLEG): container finished" podID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerID="d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c" exitCode=137
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.114524 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c","Type":"ContainerDied","Data":"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c"}
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.115028 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c","Type":"ContainerDied","Data":"2c7d553310530035d6f4243d4ec8d424a9dbcb3e3927033f1971bef339bd967f"}
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.114635 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.115076 4705 scope.go:117] "RemoveContainer" containerID="d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.118061 4705 generic.go:334] "Generic (PLEG): container finished" podID="c403fb44-6250-449b-b257-953b925c635a" containerID="a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51" exitCode=137
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.118127 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerDied","Data":"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51"}
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.118174 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerDied","Data":"dee0ea11222770d7565040c2a8d452d725637a688407fbd260ff2426c890c0e6"}
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.118282 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.175683 4705 scope.go:117] "RemoveContainer" containerID="d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c"
Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.185810 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c\": container with ID starting with d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c not found: ID does not exist" containerID="d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.185868 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c"} err="failed to get container status \"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c\": rpc error: code = NotFound desc = could not find container \"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c\": container with ID starting with d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c not found: ID does not exist"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.185903 4705 scope.go:117] "RemoveContainer" containerID="a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.206811 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.242069 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.255146 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.266905 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.274646 4705 scope.go:117] "RemoveContainer" containerID="4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.290574 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.292239 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-log"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.292263 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-log"
Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.292298 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-metadata"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.292310 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-metadata"
Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.292408 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerName="nova-cell1-novncproxy-novncproxy"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.292422 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerName="nova-cell1-novncproxy-novncproxy"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.296006 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerName="nova-cell1-novncproxy-novncproxy"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.296053 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-log"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.296107 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-metadata"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.298014 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.301914 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.304573 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.304905 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.315300 4705 scope.go:117] "RemoveContainer" containerID="a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.315466 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.315936 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51\": container with ID starting with a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51 not found: ID does not exist" containerID="a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.315973 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51"} err="failed to get container status \"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51\": rpc error: code = NotFound desc = could not find container \"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51\": container with ID starting with a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51 not found: ID does not exist"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.316003 4705 scope.go:117] "RemoveContainer" containerID="4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035"
Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.316580 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035\": container with ID starting with 4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035 not found: ID does not exist" containerID="4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.316663 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035"} err="failed to get container status \"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035\": rpc error: code = NotFound desc = could not find container \"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035\": container with ID starting with 4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035 not found: ID does not exist"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.334544 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.344342 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.347234 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.348098 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.360029 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.419523 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.420813 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.420884 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421067 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421275 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421314 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421508 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmgv\" (UniqueName: \"kubernetes.io/projected/b49f6329-2396-4d3e-9b28-2dd3586b1965-kube-api-access-zjmgv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421613 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524232 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524364 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524450 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524473 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524490 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524518 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524541 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524566 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524662 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjmgv\" (UniqueName: \"kubernetes.io/projected/b49f6329-2396-4d3e-9b28-2dd3586b1965-kube-api-access-zjmgv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524722 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.525823 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.532341 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.532527 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.533237 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.532303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.533980 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.535969 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.536074 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.544571 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjmgv\" (UniqueName: \"kubernetes.io/projected/b49f6329-2396-4d3e-9b28-2dd3586b1965-kube-api-access-zjmgv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.546616 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.640205 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.664191 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.139252 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.237:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.193836 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.205860 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.437315 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" path="/var/lib/kubelet/pods/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c/volumes" Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.438181 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c403fb44-6250-449b-b257-953b925c635a" path="/var/lib/kubelet/pods/c403fb44-6250-449b-b257-953b925c635a/volumes" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.155856 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"b49f6329-2396-4d3e-9b28-2dd3586b1965","Type":"ContainerStarted","Data":"7e7475ab313e465395ff2e16f5d62cebf15b40dcf04162ce2e50542d92f6cb80"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.156586 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b49f6329-2396-4d3e-9b28-2dd3586b1965","Type":"ContainerStarted","Data":"ae104e1efd72f98ec608627f695ab716a8c8a1949b6a9a044342387b09347f55"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.157389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerStarted","Data":"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.157439 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerStarted","Data":"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.157450 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerStarted","Data":"eeec271298f4dcb2eb43a0a1c49fcdad72fcc161271d50f3ad69a11322b20f9c"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.195061 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.195019172 podStartE2EDuration="2.195019172s" podCreationTimestamp="2026-02-16 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:43.181621235 +0000 UTC m=+1457.366598351" watchObservedRunningTime="2026-02-16 15:17:43.195019172 +0000 UTC m=+1457.379996258" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.222792 
4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.225394 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.225838 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.230317 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.258986 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.258957283 podStartE2EDuration="2.258957283s" podCreationTimestamp="2026-02-16 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:43.207397631 +0000 UTC m=+1457.392374717" watchObservedRunningTime="2026-02-16 15:17:43.258957283 +0000 UTC m=+1457.443934359" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.175753 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.185204 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.470432 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.473764 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.508129 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657194 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657325 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657606 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657988 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.658099 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760501 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760705 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760820 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760891 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760967 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.761003 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762134 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762150 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762298 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762878 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.788845 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.821011 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:45 crc kubenswrapper[4705]: I0216 15:17:45.407535 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.212886 4705 generic.go:334] "Generic (PLEG): container finished" podID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerID="6eed687bcb719d3e812c0d5596618acff3bcb4d19391166e9b43a17a41b58c2d" exitCode=0 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.213052 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerDied","Data":"6eed687bcb719d3e812c0d5596618acff3bcb4d19391166e9b43a17a41b58c2d"} Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.213662 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerStarted","Data":"fd288e684e0a43e4b376cb33683431b8af354b638eab9d3f39fe75d11b79e614"} Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.640390 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.649220 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.649529 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-central-agent" containerID="cri-o://18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82" gracePeriod=30 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.651169 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="sg-core" containerID="cri-o://e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a" gracePeriod=30 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.651234 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-notification-agent" containerID="cri-o://5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2" gracePeriod=30 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.651406 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" containerID="cri-o://f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c" gracePeriod=30 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.663430 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.252:3000/\": EOF" Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.664521 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.664622 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.209769 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.266911 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerStarted","Data":"44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7"} Feb 16 15:17:47 crc kubenswrapper[4705]: 
I0216 15:17:47.268063 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.277621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c"} Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.276938 4705 generic.go:334] "Generic (PLEG): container finished" podID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerID="f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c" exitCode=0 Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.279750 4705 generic.go:334] "Generic (PLEG): container finished" podID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerID="e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a" exitCode=2 Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.279821 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a"} Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.280351 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" containerID="cri-o://66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" gracePeriod=30 Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.280431 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" containerID="cri-o://3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" gracePeriod=30 Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.306765 4705 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" podStartSLOduration=3.306736153 podStartE2EDuration="3.306736153s" podCreationTimestamp="2026-02-16 15:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:47.295488606 +0000 UTC m=+1461.480465692" watchObservedRunningTime="2026-02-16 15:17:47.306736153 +0000 UTC m=+1461.491713229" Feb 16 15:17:47 crc kubenswrapper[4705]: E0216 15:17:47.794659 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda482712d_42ed_49b1_b0eb_fb1cf899f3db.slice/crio-66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6ad68b_9e76_4c9d_ad39_6377b4b51f4c.slice/crio-d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc403fb44_6250_449b_b257_953b925c635a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda482712d_42ed_49b1_b0eb_fb1cf899f3db.slice/crio-conmon-66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc403fb44_6250_449b_b257_953b925c635a.slice/crio-conmon-a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6ad68b_9e76_4c9d_ad39_6377b4b51f4c.slice/crio-conmon-d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6ad68b_9e76_4c9d_ad39_6377b4b51f4c.slice/crio-2c7d553310530035d6f4243d4ec8d424a9dbcb3e3927033f1971bef339bd967f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3bb879_c0d5_4b09_a454_034daa93ab77.slice/crio-1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3bb879_c0d5_4b09_a454_034daa93ab77.slice/crio-conmon-1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6ad68b_9e76_4c9d_ad39_6377b4b51f4c.slice\": RecentStats: unable to find data in memory cache]" Feb 16 15:17:47 crc kubenswrapper[4705]: E0216 15:17:47.795272 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3bb879_c0d5_4b09_a454_034daa93ab77.slice/crio-conmon-1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.192118 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.295249 4705 generic.go:334] "Generic (PLEG): container finished" podID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerID="66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" exitCode=143 Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.295305 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerDied","Data":"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8"} Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.298250 4705 generic.go:334] "Generic (PLEG): container finished" podID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerID="18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82" exitCode=0 Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.298272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82"} Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.303716 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgfgl\" (UniqueName: \"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") pod \"6d3bb879-c0d5-4b09-a454-034daa93ab77\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.303998 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") pod \"6d3bb879-c0d5-4b09-a454-034daa93ab77\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.304044 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") pod \"6d3bb879-c0d5-4b09-a454-034daa93ab77\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") "
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.304080 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") pod \"6d3bb879-c0d5-4b09-a454-034daa93ab77\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") "
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.311748 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl" (OuterVolumeSpecName: "kube-api-access-fgfgl") pod "6d3bb879-c0d5-4b09-a454-034daa93ab77" (UID: "6d3bb879-c0d5-4b09-a454-034daa93ab77"). InnerVolumeSpecName "kube-api-access-fgfgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.312994 4705 generic.go:334] "Generic (PLEG): container finished" podID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerID="1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1" exitCode=137
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.313080 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.313123 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1"}
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.313189 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280"}
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.313218 4705 scope.go:117] "RemoveContainer" containerID="1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.314614 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts" (OuterVolumeSpecName: "scripts") pod "6d3bb879-c0d5-4b09-a454-034daa93ab77" (UID: "6d3bb879-c0d5-4b09-a454-034daa93ab77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.407439 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgfgl\" (UniqueName: \"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.407491 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.477750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d3bb879-c0d5-4b09-a454-034daa93ab77" (UID: "6d3bb879-c0d5-4b09-a454-034daa93ab77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.485423 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data" (OuterVolumeSpecName: "config-data") pod "6d3bb879-c0d5-4b09-a454-034daa93ab77" (UID: "6d3bb879-c0d5-4b09-a454-034daa93ab77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.510826 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.510895 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.550361 4705 scope.go:117] "RemoveContainer" containerID="ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.583768 4705 scope.go:117] "RemoveContainer" containerID="b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.613470 4705 scope.go:117] "RemoveContainer" containerID="e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.663358 4705 scope.go:117] "RemoveContainer" containerID="1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1"
Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.663988 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1\": container with ID starting with 1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1 not found: ID does not exist" containerID="1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.664044 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1"} err="failed to get container status \"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1\": rpc error: code = NotFound desc = could not find container \"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1\": container with ID starting with 1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1 not found: ID does not exist"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.664079 4705 scope.go:117] "RemoveContainer" containerID="ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c"
Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.667673 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c\": container with ID starting with ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c not found: ID does not exist" containerID="ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.667718 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c"} err="failed to get container status \"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c\": rpc error: code = NotFound desc = could not find container \"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c\": container with ID starting with ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c not found: ID does not exist"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.667753 4705 scope.go:117] "RemoveContainer" containerID="b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f"
Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.668039 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f\": container with ID starting with b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f not found: ID does not exist" containerID="b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.668062 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f"} err="failed to get container status \"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f\": rpc error: code = NotFound desc = could not find container \"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f\": container with ID starting with b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f not found: ID does not exist"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.668079 4705 scope.go:117] "RemoveContainer" containerID="e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f"
Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.669187 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f\": container with ID starting with e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f not found: ID does not exist" containerID="e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.669211 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f"} err="failed to get container status \"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f\": rpc error: code = NotFound desc = could not find container \"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f\": container with ID starting with e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f not found: ID does not exist"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.675579 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.709363 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.752715 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.754951 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-evaluator"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.754982 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-evaluator"
Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.755016 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-api"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.755024 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-api"
Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.755067 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-listener"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.755075 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-listener"
Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.755133 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-notifier"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.755140 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-notifier"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.756439 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-listener"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.756479 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-notifier"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.756535 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-api"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.756564 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-evaluator"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.765282 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.772048 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-l4hnj"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.773722 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.774740 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.774916 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.774969 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.811511 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.926962 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wkf\" (UniqueName: \"kubernetes.io/projected/8bb1d6b3-1208-4339-9d67-330c02618823-kube-api-access-k2wkf\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927044 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-config-data\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927188 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-public-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927273 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-scripts\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927459 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-internal-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927499 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030499 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-scripts\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030654 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-internal-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030697 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030834 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2wkf\" (UniqueName: \"kubernetes.io/projected/8bb1d6b3-1208-4339-9d67-330c02618823-kube-api-access-k2wkf\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030880 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-config-data\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-public-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.036158 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-public-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.037101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-scripts\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.037941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-internal-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.041057 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.055490 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-config-data\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.072008 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2wkf\" (UniqueName: \"kubernetes.io/projected/8bb1d6b3-1208-4339-9d67-330c02618823-kube-api-access-k2wkf\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.100132 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.611470 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"]
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.615502 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.625744 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"]
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.695730 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:49 crc kubenswrapper[4705]: W0216 15:17:49.705642 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bb1d6b3_1208_4339_9d67_330c02618823.slice/crio-eea1bd39ec7adf10785dde83cb2f67f0bb6b68295e9b1a2762fbf28d2e2a29b0 WatchSource:0}: Error finding container eea1bd39ec7adf10785dde83cb2f67f0bb6b68295e9b1a2762fbf28d2e2a29b0: Status 404 returned error can't find the container with id eea1bd39ec7adf10785dde83cb2f67f0bb6b68295e9b1a2762fbf28d2e2a29b0
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.752302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.752391 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.753186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.856563 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.856684 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.856716 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.857471 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.857595 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.881091 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.948562 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.370311 4705 generic.go:334] "Generic (PLEG): container finished" podID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerID="5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2" exitCode=0
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.370436 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2"}
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.370876 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"7af76ea020969884cac1afc67b6c684eaea556dc0d059bdf3f133791959a3f39"}
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.370897 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af76ea020969884cac1afc67b6c684eaea556dc0d059bdf3f133791959a3f39"
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.373161 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"eea1bd39ec7adf10785dde83cb2f67f0bb6b68295e9b1a2762fbf28d2e2a29b0"}
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.408387 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.446712 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" path="/var/lib/kubelet/pods/6d3bb879-c0d5-4b09-a454-034daa93ab77/volumes"
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.578645 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"]
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580689 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") "
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580761 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") "
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580883 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") "
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580935 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhffq\" (UniqueName: \"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") "
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") "
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580999 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") "
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.581040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") "
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.582930 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.585827 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.588133 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts" (OuterVolumeSpecName: "scripts") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.593079 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq" (OuterVolumeSpecName: "kube-api-access-lhffq") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "kube-api-access-lhffq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.657798 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684498 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684539 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684550 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhffq\" (UniqueName: \"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684564 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684574 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.715644 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.765657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data" (OuterVolumeSpecName: "config-data") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.786797 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.786829 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.964415 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.094537 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") pod \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") "
Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.094721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") pod \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") "
Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.095100 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") pod \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") "
Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.095231 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") pod \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") "
Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.096522 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs" (OuterVolumeSpecName: "logs") pod "a482712d-42ed-49b1-b0eb-fb1cf899f3db" (UID: "a482712d-42ed-49b1-b0eb-fb1cf899f3db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.117750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv" (OuterVolumeSpecName: "kube-api-access-ntxgv") pod "a482712d-42ed-49b1-b0eb-fb1cf899f3db" (UID: "a482712d-42ed-49b1-b0eb-fb1cf899f3db"). InnerVolumeSpecName "kube-api-access-ntxgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.155518 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data" (OuterVolumeSpecName: "config-data") pod "a482712d-42ed-49b1-b0eb-fb1cf899f3db" (UID: "a482712d-42ed-49b1-b0eb-fb1cf899f3db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.164475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a482712d-42ed-49b1-b0eb-fb1cf899f3db" (UID: "a482712d-42ed-49b1-b0eb-fb1cf899f3db"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.198832 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.199090 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.199193 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.199261 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.390831 4705 generic.go:334] "Generic (PLEG): container finished" podID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerID="3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" exitCode=0 Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.390913 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerDied","Data":"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.390964 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.391482 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerDied","Data":"b9e16d20a34c818b351cfcb18e6ae185d36b1c587820242a2f7a8a4d81bd9408"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.391510 4705 scope.go:117] "RemoveContainer" containerID="3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.397285 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"9d7d987f3057f6bfbf32a6e31f06eb31f7c7ba3db80a5d117b8e149f9352a0e4"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.397562 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"4cb2380dba203b6c9018aaa81811f515ca7fcf6667eb1d9d862b6a3d11f9a192"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.402544 4705 generic.go:334] "Generic (PLEG): container finished" podID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerID="c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4" exitCode=0 Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.402685 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.402731 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerDied","Data":"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.402770 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerStarted","Data":"e1adb33222027cc4f090326df3b9dd77bb0143da9f839682a1a04a68a2f7c1af"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.439520 4705 scope.go:117] "RemoveContainer" containerID="66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.496055 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.522934 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.531870 4705 scope.go:117] "RemoveContainer" containerID="3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.535311 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8\": container with ID starting with 3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8 not found: ID does not exist" containerID="3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.535361 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8"} err="failed to get container status \"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8\": rpc error: code = NotFound desc = could not find container \"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8\": container with ID starting with 3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8 not found: ID does not exist" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.535409 4705 scope.go:117] "RemoveContainer" containerID="66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.544554 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8\": container with ID starting with 66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8 not found: ID does not exist" containerID="66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.544624 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8"} err="failed to get container status \"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8\": rpc error: code = NotFound desc = could not find container \"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8\": container with ID starting with 66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8 not found: ID does not exist" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.555717 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.573446 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 
16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.615683 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620063 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-notification-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620112 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-notification-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620253 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620267 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620314 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="sg-core" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620336 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="sg-core" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620386 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620395 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620418 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620426 4705 
state_mem.go:107] "Deleted CPUSet assignment" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620471 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-central-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620480 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-central-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621461 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621491 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621509 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-central-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621531 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="sg-core" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621552 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621567 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-notification-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.624672 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.631627 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.631715 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.632722 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.640796 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.654236 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.665310 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.665710 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.670144 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.670269 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.676141 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.676318 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.678279 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.695737 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.726656 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.726718 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.726756 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.727159 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.727387 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.727453 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.833590 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") pod \"ceilometer-0\" (UID: 
\"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834136 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834231 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834266 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834394 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834484 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834569 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834600 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.836449 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") pod \"nova-api-0\" (UID: 
\"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.843849 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.843906 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.857569 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.861961 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.863627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.938647 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.939763 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.940019 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.940186 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.940323 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.940425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 
15:17:51.940515 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.941103 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.945481 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.948260 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.950976 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.953697 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.954305 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.957178 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.969974 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.002702 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.479058 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" path="/var/lib/kubelet/pods/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8/volumes" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.480903 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" path="/var/lib/kubelet/pods/a482712d-42ed-49b1-b0eb-fb1cf899f3db/volumes" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.482002 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"a6f9dc04b61b9ef3151f79e7b43de5c5e596501dbdf9aa73754333bd3dfe7ac5"} Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.505463 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.692906 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.692941 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.718985 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"] Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.722791 
4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.728023 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.728670 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.772232 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"] Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.783655 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.890740 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.890791 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.890884 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " 
pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.890999 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.900547 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:52 crc kubenswrapper[4705]: W0216 15:17:52.919952 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6977ac78_db27_460b_8a38_582c65dbb67b.slice/crio-634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6 WatchSource:0}: Error finding container 634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6: Status 404 returned error can't find the container with id 634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6 Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.996498 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.997922 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.998063 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.998253 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.012069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.012986 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.015636 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.016260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.186195 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.513464 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"aeb06779efac7c38585b17cfd3ae6968f2916d9ee186859b6bf4a5e6711bb96e"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.594019 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"ea6a067c1817b0e280afbb42ee719194207bb37c3d0040c7caa0f8cda7c8399c"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.638234 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerStarted","Data":"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.705758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerStarted","Data":"3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.705821 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerStarted","Data":"634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.755042 4705 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.683041777 podStartE2EDuration="5.755017647s" podCreationTimestamp="2026-02-16 15:17:48 +0000 UTC" firstStartedPulling="2026-02-16 15:17:49.708261877 +0000 UTC m=+1463.893238943" lastFinishedPulling="2026-02-16 15:17:52.780237727 +0000 UTC m=+1466.965214813" observedRunningTime="2026-02-16 15:17:53.678877402 +0000 UTC m=+1467.863854478" watchObservedRunningTime="2026-02-16 15:17:53.755017647 +0000 UTC m=+1467.939994723" Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.240696 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"] Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.718590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v596j" event={"ID":"7d98759e-f50f-4b94-bd6a-8cfa1e083675","Type":"ContainerStarted","Data":"eee5c8bc6c54de4fa60aca953615e0f47f05dac72e43473a8138c9827fdeee6c"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.719046 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v596j" event={"ID":"7d98759e-f50f-4b94-bd6a-8cfa1e083675","Type":"ContainerStarted","Data":"e86c3ffaf3eff8a0a9a0fe2e47c66857a352df2ebc46352dbb89be5bca3ba6eb"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.722774 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerStarted","Data":"5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.733094 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.733162 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.742177 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-v596j" podStartSLOduration=2.742158846 podStartE2EDuration="2.742158846s" podCreationTimestamp="2026-02-16 15:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:54.737039051 +0000 UTC m=+1468.922016127" watchObservedRunningTime="2026-02-16 15:17:54.742158846 +0000 UTC m=+1468.927135922" Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.774756 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.7747365630000003 podStartE2EDuration="3.774736563s" podCreationTimestamp="2026-02-16 15:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:54.770050061 +0000 UTC m=+1468.955027137" watchObservedRunningTime="2026-02-16 15:17:54.774736563 +0000 UTC m=+1468.959713629" Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.823670 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.935577 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.936270 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="dnsmasq-dns" 
containerID="cri-o://b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" gracePeriod=10 Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.682799 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.722114 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.722204 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.722629 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.722708 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.723043 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: 
\"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.723111 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nxgb\" (UniqueName: \"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.762851 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb" (OuterVolumeSpecName: "kube-api-access-2nxgb") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "kube-api-access-2nxgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.790525 4705 generic.go:334] "Generic (PLEG): container finished" podID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerID="b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" exitCode=0 Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.792019 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.792689 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerDied","Data":"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37"} Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.792716 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerDied","Data":"ede06e3254a42f9f6eec0ac56c7e1b7e4b102971ccf37608944546f6accc4101"} Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.792732 4705 scope.go:117] "RemoveContainer" containerID="b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.834827 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nxgb\" (UniqueName: \"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.854666 4705 scope.go:117] "RemoveContainer" containerID="ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.903927 4705 scope.go:117] "RemoveContainer" containerID="b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" Feb 16 15:17:55 crc kubenswrapper[4705]: E0216 15:17:55.904653 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37\": container with ID starting with b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37 not found: ID does not exist" containerID="b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" Feb 16 15:17:55 crc 
kubenswrapper[4705]: I0216 15:17:55.905106 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37"} err="failed to get container status \"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37\": rpc error: code = NotFound desc = could not find container \"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37\": container with ID starting with b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37 not found: ID does not exist" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.905139 4705 scope.go:117] "RemoveContainer" containerID="ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31" Feb 16 15:17:55 crc kubenswrapper[4705]: E0216 15:17:55.906484 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31\": container with ID starting with ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31 not found: ID does not exist" containerID="ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.906514 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31"} err="failed to get container status \"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31\": rpc error: code = NotFound desc = could not find container \"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31\": container with ID starting with ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31 not found: ID does not exist" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.910606 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.916199 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config" (OuterVolumeSpecName: "config") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.933581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.942327 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.942365 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.942376 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.944252 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.953938 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.045219 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.045272 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.203469 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.221532 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.437373 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" path="/var/lib/kubelet/pods/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7/volumes" Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.806769 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0"} Feb 16 15:17:57 crc kubenswrapper[4705]: I0216 15:17:57.841590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba"} Feb 16 15:17:57 crc kubenswrapper[4705]: I0216 15:17:57.841793 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:17:57 crc kubenswrapper[4705]: I0216 15:17:57.890738 
4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.394750619 podStartE2EDuration="6.890704403s" podCreationTimestamp="2026-02-16 15:17:51 +0000 UTC" firstStartedPulling="2026-02-16 15:17:52.740262791 +0000 UTC m=+1466.925239867" lastFinishedPulling="2026-02-16 15:17:57.236216565 +0000 UTC m=+1471.421193651" observedRunningTime="2026-02-16 15:17:57.873788386 +0000 UTC m=+1472.058765462" watchObservedRunningTime="2026-02-16 15:17:57.890704403 +0000 UTC m=+1472.075681479" Feb 16 15:17:58 crc kubenswrapper[4705]: I0216 15:17:58.864991 4705 generic.go:334] "Generic (PLEG): container finished" podID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerID="b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda" exitCode=0 Feb 16 15:17:58 crc kubenswrapper[4705]: I0216 15:17:58.865087 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerDied","Data":"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda"} Feb 16 15:17:59 crc kubenswrapper[4705]: I0216 15:17:59.881316 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerStarted","Data":"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a"} Feb 16 15:17:59 crc kubenswrapper[4705]: I0216 15:17:59.909268 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6jtvt" podStartSLOduration=3.034050216 podStartE2EDuration="10.909246878s" podCreationTimestamp="2026-02-16 15:17:49 +0000 UTC" firstStartedPulling="2026-02-16 15:17:51.439403054 +0000 UTC m=+1465.624380130" lastFinishedPulling="2026-02-16 15:17:59.314599696 +0000 UTC m=+1473.499576792" observedRunningTime="2026-02-16 15:17:59.903677701 +0000 UTC m=+1474.088654777" 
watchObservedRunningTime="2026-02-16 15:17:59.909246878 +0000 UTC m=+1474.094223954" Feb 16 15:17:59 crc kubenswrapper[4705]: I0216 15:17:59.949140 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:59 crc kubenswrapper[4705]: I0216 15:17:59.949685 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.006402 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6jtvt" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:18:01 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:18:01 crc kubenswrapper[4705]: > Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.671019 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.673432 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.677560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.913646 4705 generic.go:334] "Generic (PLEG): container finished" podID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" containerID="eee5c8bc6c54de4fa60aca953615e0f47f05dac72e43473a8138c9827fdeee6c" exitCode=0 Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.913762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v596j" event={"ID":"7d98759e-f50f-4b94-bd6a-8cfa1e083675","Type":"ContainerDied","Data":"eee5c8bc6c54de4fa60aca953615e0f47f05dac72e43473a8138c9827fdeee6c"} Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 
15:18:01.921021 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.952220 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.952309 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:18:02 crc kubenswrapper[4705]: I0216 15:18:02.967571 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.2:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:18:02 crc kubenswrapper[4705]: I0216 15:18:02.969206 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.2:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.562997 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.668532 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") pod \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.668603 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") pod \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.668752 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") pod \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.668876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") pod \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.692938 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts" (OuterVolumeSpecName: "scripts") pod "7d98759e-f50f-4b94-bd6a-8cfa1e083675" (UID: "7d98759e-f50f-4b94-bd6a-8cfa1e083675"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.693202 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89" (OuterVolumeSpecName: "kube-api-access-4kc89") pod "7d98759e-f50f-4b94-bd6a-8cfa1e083675" (UID: "7d98759e-f50f-4b94-bd6a-8cfa1e083675"). InnerVolumeSpecName "kube-api-access-4kc89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.719564 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d98759e-f50f-4b94-bd6a-8cfa1e083675" (UID: "7d98759e-f50f-4b94-bd6a-8cfa1e083675"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.722705 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data" (OuterVolumeSpecName: "config-data") pod "7d98759e-f50f-4b94-bd6a-8cfa1e083675" (UID: "7d98759e-f50f-4b94-bd6a-8cfa1e083675"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.772089 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.772122 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.772134 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.772146 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.952410 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.952848 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v596j" event={"ID":"7d98759e-f50f-4b94-bd6a-8cfa1e083675","Type":"ContainerDied","Data":"e86c3ffaf3eff8a0a9a0fe2e47c66857a352df2ebc46352dbb89be5bca3ba6eb"} Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.953076 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e86c3ffaf3eff8a0a9a0fe2e47c66857a352df2ebc46352dbb89be5bca3ba6eb" Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.170844 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.171103 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerName="nova-scheduler-scheduler" containerID="cri-o://48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" gracePeriod=30 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.220070 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.220352 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" containerID="cri-o://3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4" gracePeriod=30 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.220911 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" containerID="cri-o://5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3" gracePeriod=30 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 
15:18:04.244782 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.967224 4705 generic.go:334] "Generic (PLEG): container finished" podID="6977ac78-db27-460b-8a38-582c65dbb67b" containerID="3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4" exitCode=143 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.967267 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerDied","Data":"3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4"} Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.967847 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" containerID="cri-o://fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" gracePeriod=30 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.967897 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" containerID="cri-o://972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" gracePeriod=30 Feb 16 15:18:05 crc kubenswrapper[4705]: I0216 15:18:05.982777 4705 generic.go:334] "Generic (PLEG): container finished" podID="628e6201-a994-4614-9b4d-3f261b718186" containerID="fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" exitCode=143 Feb 16 15:18:05 crc kubenswrapper[4705]: I0216 15:18:05.983261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerDied","Data":"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e"} Feb 16 15:18:06 crc kubenswrapper[4705]: E0216 15:18:06.461056 4705 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 is running failed: container process not found" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:18:06 crc kubenswrapper[4705]: E0216 15:18:06.461749 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 is running failed: container process not found" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:18:06 crc kubenswrapper[4705]: E0216 15:18:06.462080 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 is running failed: container process not found" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:18:06 crc kubenswrapper[4705]: E0216 15:18:06.462120 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerName="nova-scheduler-scheduler" Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.779489 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.894742 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") pod \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.894823 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") pod \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.894901 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") pod \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.902178 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk" (OuterVolumeSpecName: "kube-api-access-wsrjk") pod "24dafc8c-fbe7-45cc-9558-fad23223b4d0" (UID: "24dafc8c-fbe7-45cc-9558-fad23223b4d0"). InnerVolumeSpecName "kube-api-access-wsrjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.940499 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24dafc8c-fbe7-45cc-9558-fad23223b4d0" (UID: "24dafc8c-fbe7-45cc-9558-fad23223b4d0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.942442 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data" (OuterVolumeSpecName: "config-data") pod "24dafc8c-fbe7-45cc-9558-fad23223b4d0" (UID: "24dafc8c-fbe7-45cc-9558-fad23223b4d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:06.999923 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.000402 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.000423 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014004 4705 generic.go:334] "Generic (PLEG): container finished" podID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" exitCode=0 Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014062 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"24dafc8c-fbe7-45cc-9558-fad23223b4d0","Type":"ContainerDied","Data":"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372"} Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014098 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"24dafc8c-fbe7-45cc-9558-fad23223b4d0","Type":"ContainerDied","Data":"3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814"} Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014118 4705 scope.go:117] "RemoveContainer" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014345 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.076763 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.086979 4705 scope.go:117] "RemoveContainer" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.095631 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372\": container with ID starting with 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 not found: ID does not exist" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.095696 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372"} err="failed to get container status \"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372\": rpc error: code = NotFound desc = could not find container \"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372\": container with ID starting with 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 not found: ID does not exist" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.097470 4705 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.112921 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.113598 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="init" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.113619 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="init" Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.113693 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerName="nova-scheduler-scheduler" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.113700 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerName="nova-scheduler-scheduler" Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.113718 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" containerName="nova-manage" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.113725 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" containerName="nova-manage" Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.113754 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="dnsmasq-dns" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.113760 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="dnsmasq-dns" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.114015 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" 
containerName="nova-scheduler-scheduler" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.114037 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="dnsmasq-dns" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.114053 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" containerName="nova-manage" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.115107 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.117423 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.145874 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.218621 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6f79\" (UniqueName: \"kubernetes.io/projected/e67e0dd7-af17-4240-ab5a-b6c149913841-kube-api-access-d6f79\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.219700 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.219912 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-config-data\") pod 
\"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.323214 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6f79\" (UniqueName: \"kubernetes.io/projected/e67e0dd7-af17-4240-ab5a-b6c149913841-kube-api-access-d6f79\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.323420 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.323464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-config-data\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.330565 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.331561 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-config-data\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.345010 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6f79\" (UniqueName: \"kubernetes.io/projected/e67e0dd7-af17-4240-ab5a-b6c149913841-kube-api-access-d6f79\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.437791 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.060517 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.110568 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:41636->10.217.0.254:8775: read: connection reset by peer" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.110993 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:41622->10.217.0.254:8775: read: connection reset by peer" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.442814 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" path="/var/lib/kubelet/pods/24dafc8c-fbe7-45cc-9558-fad23223b4d0/volumes" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.803859 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.884994 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.886241 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.886610 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.886735 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.886919 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.888678 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs" (OuterVolumeSpecName: "logs") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.908115 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl" (OuterVolumeSpecName: "kube-api-access-t2xzl") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "kube-api-access-t2xzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.956570 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data" (OuterVolumeSpecName: "config-data") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.967978 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.993441 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.993746 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.993856 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.993913 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.020577 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072051 4705 generic.go:334] "Generic (PLEG): container finished" podID="628e6201-a994-4614-9b4d-3f261b718186" containerID="972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" exitCode=0 Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072119 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerDied","Data":"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerDied","Data":"eeec271298f4dcb2eb43a0a1c49fcdad72fcc161271d50f3ad69a11322b20f9c"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072173 4705 scope.go:117] "RemoveContainer" containerID="972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072406 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.081761 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e67e0dd7-af17-4240-ab5a-b6c149913841","Type":"ContainerStarted","Data":"eb759fb6b2e21021de42c5ef8b41c6a6ff316da783d3073f3e00a48b6ad7b382"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.081813 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e67e0dd7-af17-4240-ab5a-b6c149913841","Type":"ContainerStarted","Data":"46142ee456045dcc700269dc212fada2e8ad6f9af585e9f2a3f2e6f01c476037"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.087580 4705 generic.go:334] "Generic (PLEG): container finished" podID="6977ac78-db27-460b-8a38-582c65dbb67b" containerID="5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3" exitCode=0 Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.087629 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerDied","Data":"5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.098940 4705 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.133592 4705 scope.go:117] "RemoveContainer" containerID="fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.139068 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.13904697 podStartE2EDuration="2.13904697s" podCreationTimestamp="2026-02-16 15:18:07 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:18:09.117326358 +0000 UTC m=+1483.302303444" watchObservedRunningTime="2026-02-16 15:18:09.13904697 +0000 UTC m=+1483.324024046" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.164711 4705 scope.go:117] "RemoveContainer" containerID="972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.165235 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de\": container with ID starting with 972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de not found: ID does not exist" containerID="972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.165269 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de"} err="failed to get container status \"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de\": rpc error: code = NotFound desc = could not find container \"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de\": container with ID starting with 972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de not found: ID does not exist" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.165291 4705 scope.go:117] "RemoveContainer" containerID="fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.165697 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e\": container with ID starting with 
fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e not found: ID does not exist" containerID="fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.165723 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e"} err="failed to get container status \"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e\": rpc error: code = NotFound desc = could not find container \"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e\": container with ID starting with fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e not found: ID does not exist" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.198963 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.241952 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.248893 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260019 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.260778 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260800 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.260840 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260849 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.260863 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260869 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.260879 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260885 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.261137 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="628e6201-a994-4614-9b4d-3f261b718186" 
containerName="nova-metadata-metadata" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.261158 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.261181 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.261198 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.262804 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.264885 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.264979 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.285538 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308302 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308410 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: 
\"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308729 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308884 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309270 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs" (OuterVolumeSpecName: "logs") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nglzb\" (UniqueName: \"kubernetes.io/projected/e121221e-aecf-4425-bb78-e384ce98e73b-kube-api-access-nglzb\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309672 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309731 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-config-data\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309929 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e121221e-aecf-4425-bb78-e384ce98e73b-logs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309992 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc 
kubenswrapper[4705]: I0216 15:18:09.310096 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.314707 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh" (OuterVolumeSpecName: "kube-api-access-f8lhh") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "kube-api-access-f8lhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.355400 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data" (OuterVolumeSpecName: "config-data") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.359959 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.413500 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.414629 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e121221e-aecf-4425-bb78-e384ce98e73b-logs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.414706 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.415308 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nglzb\" (UniqueName: \"kubernetes.io/projected/e121221e-aecf-4425-bb78-e384ce98e73b-kube-api-access-nglzb\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.416260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e121221e-aecf-4425-bb78-e384ce98e73b-logs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.416877 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.417039 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-config-data\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.419156 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.419516 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.419611 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.419679 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.422262 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.425025 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " 
pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.429041 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-config-data\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.435041 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nglzb\" (UniqueName: \"kubernetes.io/projected/e121221e-aecf-4425-bb78-e384ce98e73b-kube-api-access-nglzb\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.449417 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.522785 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.600880 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:18:10 crc kubenswrapper[4705]: W0216 15:18:10.092298 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode121221e_aecf_4425_bb78_e384ce98e73b.slice/crio-1d40aca17cbcad8fed7eac1369a77a69a9d791294a104f41f271b5d73e6ed988 WatchSource:0}: Error finding container 1d40aca17cbcad8fed7eac1369a77a69a9d791294a104f41f271b5d73e6ed988: Status 404 returned error can't find the container with id 1d40aca17cbcad8fed7eac1369a77a69a9d791294a104f41f271b5d73e6ed988 Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.094174 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.108246 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerDied","Data":"634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6"} Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.108298 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.108339 4705 scope.go:117] "RemoveContainer" containerID="5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.148580 4705 scope.go:117] "RemoveContainer" containerID="3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.154519 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.174130 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.193800 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.196547 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.209190 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.209310 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.209389 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.220965 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.249790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.249849 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.249922 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.249971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-config-data\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.250018 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f98b0f-bb45-4942-81e0-68e6f2658df5-logs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.250083 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc4zk\" (UniqueName: \"kubernetes.io/projected/b3f98b0f-bb45-4942-81e0-68e6f2658df5-kube-api-access-dc4zk\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352534 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352786 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352874 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352922 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-config-data\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f98b0f-bb45-4942-81e0-68e6f2658df5-logs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.353040 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc4zk\" (UniqueName: \"kubernetes.io/projected/b3f98b0f-bb45-4942-81e0-68e6f2658df5-kube-api-access-dc4zk\") pod \"nova-api-0\" (UID: 
\"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.358152 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.358913 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.359072 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-config-data\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.361032 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.361843 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f98b0f-bb45-4942-81e0-68e6f2658df5-logs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.380157 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc4zk\" (UniqueName: 
\"kubernetes.io/projected/b3f98b0f-bb45-4942-81e0-68e6f2658df5-kube-api-access-dc4zk\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.446143 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="628e6201-a994-4614-9b4d-3f261b718186" path="/var/lib/kubelet/pods/628e6201-a994-4614-9b4d-3f261b718186/volumes" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.448884 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" path="/var/lib/kubelet/pods/6977ac78-db27-460b-8a38-582c65dbb67b/volumes" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.541544 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.013791 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6jtvt" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:18:11 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:18:11 crc kubenswrapper[4705]: > Feb 16 15:18:11 crc kubenswrapper[4705]: W0216 15:18:11.122457 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3f98b0f_bb45_4942_81e0_68e6f2658df5.slice/crio-b55af5f4a6ddf7788f060d1a98e7c5c9dbbfc2ec1466052074b24546e2da6f8d WatchSource:0}: Error finding container b55af5f4a6ddf7788f060d1a98e7c5c9dbbfc2ec1466052074b24546e2da6f8d: Status 404 returned error can't find the container with id b55af5f4a6ddf7788f060d1a98e7c5c9dbbfc2ec1466052074b24546e2da6f8d Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.124679 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e121221e-aecf-4425-bb78-e384ce98e73b","Type":"ContainerStarted","Data":"80b053d99a4a239647c917dadc86268c20bab7e4733d84f70801e778283d19ee"} Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.124776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e121221e-aecf-4425-bb78-e384ce98e73b","Type":"ContainerStarted","Data":"605515fa5a1ce023877b35e7aca63570cea6d73ee46bdb734ebfe10778815ff4"} Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.124804 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.125164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e121221e-aecf-4425-bb78-e384ce98e73b","Type":"ContainerStarted","Data":"1d40aca17cbcad8fed7eac1369a77a69a9d791294a104f41f271b5d73e6ed988"} Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.151831 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.151814763 podStartE2EDuration="2.151814763s" podCreationTimestamp="2026-02-16 15:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:18:11.14106412 +0000 UTC m=+1485.326041196" watchObservedRunningTime="2026-02-16 15:18:11.151814763 +0000 UTC m=+1485.336791839" Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.147286 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3f98b0f-bb45-4942-81e0-68e6f2658df5","Type":"ContainerStarted","Data":"175f9fe5c00efdc0e273ab22128eec8a1538b8d92d019a733175abba7df05320"} Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.148103 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b3f98b0f-bb45-4942-81e0-68e6f2658df5","Type":"ContainerStarted","Data":"e42615b7c1c1e4d0238110dcaeb523081af56ef70f114d18a4a80c8f964f6b6b"}
Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.148125 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3f98b0f-bb45-4942-81e0-68e6f2658df5","Type":"ContainerStarted","Data":"b55af5f4a6ddf7788f060d1a98e7c5c9dbbfc2ec1466052074b24546e2da6f8d"}
Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.174245 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.174222094 podStartE2EDuration="2.174222094s" podCreationTimestamp="2026-02-16 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:18:12.167492954 +0000 UTC m=+1486.352470030" watchObservedRunningTime="2026-02-16 15:18:12.174222094 +0000 UTC m=+1486.359199170"
Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.438547 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 15:18:14 crc kubenswrapper[4705]: I0216 15:18:14.601232 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 15:18:14 crc kubenswrapper[4705]: I0216 15:18:14.602064 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 15:18:17 crc kubenswrapper[4705]: I0216 15:18:17.437975 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 16 15:18:17 crc kubenswrapper[4705]: I0216 15:18:17.488719 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 16 15:18:18 crc kubenswrapper[4705]: I0216 15:18:18.267663 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 16 15:18:19 crc kubenswrapper[4705]: I0216 15:18:19.601535 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 15:18:19 crc kubenswrapper[4705]: I0216 15:18:19.603243 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 15:18:20 crc kubenswrapper[4705]: I0216 15:18:20.542135 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 15:18:20 crc kubenswrapper[4705]: I0216 15:18:20.542599 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 15:18:20 crc kubenswrapper[4705]: I0216 15:18:20.621521 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e121221e-aecf-4425-bb78-e384ce98e73b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 15:18:20 crc kubenswrapper[4705]: I0216 15:18:20.621559 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e121221e-aecf-4425-bb78-e384ce98e73b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 15:18:21 crc kubenswrapper[4705]: I0216 15:18:21.016248 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6jtvt" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" probeResult="failure" output=<
Feb 16 15:18:21 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 15:18:21 crc kubenswrapper[4705]: >
Feb 16 15:18:21 crc kubenswrapper[4705]: I0216 15:18:21.554568 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3f98b0f-bb45-4942-81e0-68e6f2658df5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 15:18:21 crc kubenswrapper[4705]: I0216 15:18:21.554622 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3f98b0f-bb45-4942-81e0-68e6f2658df5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 15:18:22 crc kubenswrapper[4705]: I0216 15:18:22.015800 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.244567 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.245524 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerName="kube-state-metrics" containerID="cri-o://24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95" gracePeriod=30
Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.320079 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.320359 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="683ef288-8b6e-4612-be52-d1654bd75098" containerName="mysqld-exporter" containerID="cri-o://2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8" gracePeriod=30
Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.880792 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.990966 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.991913 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") pod \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\" (UID: \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\") "
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.006673 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl" (OuterVolumeSpecName: "kube-api-access-mdfvl") pod "bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" (UID: "bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0"). InnerVolumeSpecName "kube-api-access-mdfvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.094532 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") pod \"683ef288-8b6e-4612-be52-d1654bd75098\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") "
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.094918 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") pod \"683ef288-8b6e-4612-be52-d1654bd75098\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") "
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.095024 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxz7l\" (UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") pod \"683ef288-8b6e-4612-be52-d1654bd75098\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") "
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.095987 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.099295 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l" (OuterVolumeSpecName: "kube-api-access-bxz7l") pod "683ef288-8b6e-4612-be52-d1654bd75098" (UID: "683ef288-8b6e-4612-be52-d1654bd75098"). InnerVolumeSpecName "kube-api-access-bxz7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.131675 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "683ef288-8b6e-4612-be52-d1654bd75098" (UID: "683ef288-8b6e-4612-be52-d1654bd75098"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.161942 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data" (OuterVolumeSpecName: "config-data") pod "683ef288-8b6e-4612-be52-d1654bd75098" (UID: "683ef288-8b6e-4612-be52-d1654bd75098"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.198512 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxz7l\" (UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.198561 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.198571 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359675 4705 generic.go:334] "Generic (PLEG): container finished" podID="683ef288-8b6e-4612-be52-d1654bd75098" containerID="2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8" exitCode=2
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359741 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"683ef288-8b6e-4612-be52-d1654bd75098","Type":"ContainerDied","Data":"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8"}
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"683ef288-8b6e-4612-be52-d1654bd75098","Type":"ContainerDied","Data":"3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69"}
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359797 4705 scope.go:117] "RemoveContainer" containerID="2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359934 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.369094 4705 generic.go:334] "Generic (PLEG): container finished" podID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerID="24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95" exitCode=2
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.369148 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0","Type":"ContainerDied","Data":"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95"}
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.369177 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.369202 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0","Type":"ContainerDied","Data":"75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955"}
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.431433 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.433602 4705 scope.go:117] "RemoveContainer" containerID="2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8"
Feb 16 15:18:27 crc kubenswrapper[4705]: E0216 15:18:27.433990 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8\": container with ID starting with 2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8 not found: ID does not exist" containerID="2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.434032 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8"} err="failed to get container status \"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8\": rpc error: code = NotFound desc = could not find container \"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8\": container with ID starting with 2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8 not found: ID does not exist"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.434059 4705 scope.go:117] "RemoveContainer" containerID="24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.449347 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.467265 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 15:18:27 crc kubenswrapper[4705]: E0216 15:18:27.474797 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683ef288-8b6e-4612-be52-d1654bd75098" containerName="mysqld-exporter"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.474835 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="683ef288-8b6e-4612-be52-d1654bd75098" containerName="mysqld-exporter"
Feb 16 15:18:27 crc kubenswrapper[4705]: E0216 15:18:27.474884 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerName="kube-state-metrics"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.474893 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerName="kube-state-metrics"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.475529 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerName="kube-state-metrics"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.475570 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="683ef288-8b6e-4612-be52-d1654bd75098" containerName="mysqld-exporter"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.477485 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.484209 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.486805 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.487023 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.501267 4705 scope.go:117] "RemoveContainer" containerID="24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.507211 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-config-data\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.507330 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.507406 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.507982 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcdps\" (UniqueName: \"kubernetes.io/projected/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-kube-api-access-kcdps\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: E0216 15:18:27.511107 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95\": container with ID starting with 24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95 not found: ID does not exist" containerID="24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.511140 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95"} err="failed to get container status \"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95\": rpc error: code = NotFound desc = could not find container \"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95\": container with ID starting with 24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95 not found: ID does not exist"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.515926 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.540203 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.562184 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.566258 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.570068 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.570539 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.596182 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612501 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fp6\" (UniqueName: \"kubernetes.io/projected/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-api-access-69fp6\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612601 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-config-data\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612726 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612754 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612819 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612844 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcdps\" (UniqueName: \"kubernetes.io/projected/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-kube-api-access-kcdps\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612898 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.620574 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.639458 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-config-data\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.639624 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.644264 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcdps\" (UniqueName: \"kubernetes.io/projected/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-kube-api-access-kcdps\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.715199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69fp6\" (UniqueName: \"kubernetes.io/projected/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-api-access-69fp6\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.715285 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.715406 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.715454 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.719522 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.719752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.721457 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.732322 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69fp6\" (UniqueName: \"kubernetes.io/projected/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-api-access-69fp6\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.811010 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.886499 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 15:18:28 crc kubenswrapper[4705]: I0216 15:18:28.435416 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683ef288-8b6e-4612-be52-d1654bd75098" path="/var/lib/kubelet/pods/683ef288-8b6e-4612-be52-d1654bd75098/volumes"
Feb 16 15:18:28 crc kubenswrapper[4705]: I0216 15:18:28.436470 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" path="/var/lib/kubelet/pods/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0/volumes"
Feb 16 15:18:28 crc kubenswrapper[4705]: W0216 15:18:28.586908 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd40e4f3a_57bb_45e6_997b_39ffc0e497d9.slice/crio-96634e34e34dd4afb1e3ed4a3b26c076a90146ae17bfd2de53b239c80152a26f WatchSource:0}: Error finding container 96634e34e34dd4afb1e3ed4a3b26c076a90146ae17bfd2de53b239c80152a26f: Status 404 returned error can't find the container with id 96634e34e34dd4afb1e3ed4a3b26c076a90146ae17bfd2de53b239c80152a26f
Feb 16 15:18:28 crc kubenswrapper[4705]: I0216 15:18:28.589729 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 15:18:28 crc kubenswrapper[4705]: I0216 15:18:28.710275 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.192505 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.193162 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-central-agent" containerID="cri-o://4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281" gracePeriod=30
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.194065 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="proxy-httpd" containerID="cri-o://a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba" gracePeriod=30
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.194234 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="sg-core" containerID="cri-o://88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0" gracePeriod=30
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.194356 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-notification-agent" containerID="cri-o://d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc" gracePeriod=30
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.407068 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"db5e423c-e590-4e7b-913a-a0a10d55537d","Type":"ContainerStarted","Data":"f62cd2996483c851bc4686ba09550c79108adba85fa2dd0a75b2ef05f42146f5"}
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.412908 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d40e4f3a-57bb-45e6-997b-39ffc0e497d9","Type":"ContainerStarted","Data":"96634e34e34dd4afb1e3ed4a3b26c076a90146ae17bfd2de53b239c80152a26f"}
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.416531 4705 generic.go:334] "Generic (PLEG): container finished" podID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerID="a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba" exitCode=0
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.416575 4705 generic.go:334] "Generic (PLEG): container finished" podID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerID="88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0" exitCode=2
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.416597 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba"}
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.416614 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0"}
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.606698 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.611109 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.618359 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.017095 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.074368 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.258147 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"]
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.434551 4705 generic.go:334] "Generic (PLEG): container finished" podID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerID="4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281" exitCode=0
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.435223 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d40e4f3a-57bb-45e6-997b-39ffc0e497d9","Type":"ContainerStarted","Data":"8fc5963eedb43a94bdfaff01f3a3d86e1c39c0b2e61c081e17a443fb532d6277"}
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.435268 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281"}
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.437184 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"db5e423c-e590-4e7b-913a-a0a10d55537d","Type":"ContainerStarted","Data":"150b3e6bd321ebd2a450168fa1d037631c9949d3b48fc77d7d9938a205d6fdaa"}
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.444661 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.450960 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.927973685 podStartE2EDuration="3.45093896s" podCreationTimestamp="2026-02-16 15:18:27 +0000 UTC" firstStartedPulling="2026-02-16 15:18:28.590623933 +0000 UTC m=+1502.775600999" lastFinishedPulling="2026-02-16 15:18:29.113589198 +0000 UTC m=+1503.298566274" observedRunningTime="2026-02-16 15:18:30.448591533 +0000 UTC m=+1504.633568619" watchObservedRunningTime="2026-02-16 15:18:30.45093896 +0000 UTC m=+1504.635916036"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.540431 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.074858676 podStartE2EDuration="3.540403794s" podCreationTimestamp="2026-02-16 15:18:27 +0000 UTC" firstStartedPulling="2026-02-16 15:18:28.712629979 +0000 UTC m=+1502.897607055" lastFinishedPulling="2026-02-16 15:18:29.178175097 +0000 UTC m=+1503.363152173" observedRunningTime="2026-02-16 15:18:30.523795794 +0000 UTC m=+1504.708772870" watchObservedRunningTime="2026-02-16 15:18:30.540403794 +0000 UTC m=+1504.725380880"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.553315 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.554955 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.562220 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.568498 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.448560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.448631 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.449090 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6jtvt" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" containerID="cri-o://cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a" gracePeriod=2
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.457657 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.683869 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.684260 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.064047 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.214611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") pod \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") "
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.214958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") pod \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") "
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.215156 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") pod \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") "
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.216440 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities" (OuterVolumeSpecName: "utilities") pod "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" (UID: "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.222845 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t" (OuterVolumeSpecName: "kube-api-access-fmf2t") pod "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" (UID: "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3"). InnerVolumeSpecName "kube-api-access-fmf2t".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.319328 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.319377 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.350082 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" (UID: "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.421573 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465344 4705 generic.go:334] "Generic (PLEG): container finished" podID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerID="cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a" exitCode=0 Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465465 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465477 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerDied","Data":"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a"} Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465550 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerDied","Data":"e1adb33222027cc4f090326df3b9dd77bb0143da9f839682a1a04a68a2f7c1af"} Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465576 4705 scope.go:117] "RemoveContainer" containerID="cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.499814 4705 scope.go:117] "RemoveContainer" containerID="b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.508806 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"] Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.522821 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"] Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.524635 4705 scope.go:117] "RemoveContainer" containerID="c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.591645 4705 scope.go:117] "RemoveContainer" containerID="cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a" Feb 16 15:18:32 crc kubenswrapper[4705]: E0216 15:18:32.592398 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a\": container with ID starting with cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a not found: ID does not exist" containerID="cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.592448 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a"} err="failed to get container status \"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a\": rpc error: code = NotFound desc = could not find container \"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a\": container with ID starting with cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a not found: ID does not exist" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.592478 4705 scope.go:117] "RemoveContainer" containerID="b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda" Feb 16 15:18:32 crc kubenswrapper[4705]: E0216 15:18:32.592877 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda\": container with ID starting with b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda not found: ID does not exist" containerID="b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.592938 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda"} err="failed to get container status \"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda\": rpc error: code = NotFound desc = could not find container \"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda\": container with ID 
starting with b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda not found: ID does not exist" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.592987 4705 scope.go:117] "RemoveContainer" containerID="c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4" Feb 16 15:18:32 crc kubenswrapper[4705]: E0216 15:18:32.593363 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4\": container with ID starting with c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4 not found: ID does not exist" containerID="c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4" Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.593406 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4"} err="failed to get container status \"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4\": rpc error: code = NotFound desc = could not find container \"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4\": container with ID starting with c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4 not found: ID does not exist" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.500681 4705 generic.go:334] "Generic (PLEG): container finished" podID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerID="d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc" exitCode=0 Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.501224 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc"} Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.720757 4705 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.868949 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869009 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869137 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869232 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869330 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869426 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.870489 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.870803 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.876968 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts" (OuterVolumeSpecName: "scripts") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.899188 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt" (OuterVolumeSpecName: "kube-api-access-lb6vt") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "kube-api-access-lb6vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.922627 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.974158 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.975420 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.975586 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.976221 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") on 
node \"crc\" DevicePath \"\"" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.976517 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.981014 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.040451 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data" (OuterVolumeSpecName: "config-data") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.079683 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.079716 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.463773 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" path="/var/lib/kubelet/pods/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3/volumes" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.531749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"aeb06779efac7c38585b17cfd3ae6968f2916d9ee186859b6bf4a5e6711bb96e"} Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.531838 4705 scope.go:117] "RemoveContainer" containerID="a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.532071 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.577592 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.599279 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.622691 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623321 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="extract-content" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623339 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="extract-content" Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623352 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-central-agent" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623365 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-central-agent" Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623421 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="sg-core" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623438 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="sg-core" Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623483 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="proxy-httpd" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623492 4705 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="proxy-httpd" Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623512 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="extract-utilities" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623521 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="extract-utilities" Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623540 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623549 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623575 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-notification-agent" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623583 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-notification-agent" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623870 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="proxy-httpd" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623890 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="sg-core" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623906 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-notification-agent" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623922 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-central-agent" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623939 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.627402 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.632031 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.632711 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.632891 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.665473 4705 scope.go:117] "RemoveContainer" containerID="88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.684450 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.693447 4705 scope.go:117] "RemoveContainer" containerID="d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700417 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700649 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.701022 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.701153 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.701318 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.725965 4705 scope.go:117] "RemoveContainer" containerID="4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804479 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804600 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804698 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804778 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804843 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804898 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804968 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.806752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: 
I0216 15:18:34.806981 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.811821 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.813270 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.814673 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.816238 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.819168 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.833752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0" Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.955253 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:18:35 crc kubenswrapper[4705]: I0216 15:18:35.511930 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:18:35 crc kubenswrapper[4705]: I0216 15:18:35.530273 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:18:35 crc kubenswrapper[4705]: I0216 15:18:35.553182 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"b14abde630b2bee0d5eb1b3685bd917b7fcb2ae39f9d9939adcb84271012d464"} Feb 16 15:18:36 crc kubenswrapper[4705]: I0216 15:18:36.445017 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" path="/var/lib/kubelet/pods/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc/volumes" Feb 16 15:18:36 crc kubenswrapper[4705]: I0216 15:18:36.580328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53"} Feb 16 15:18:37 crc kubenswrapper[4705]: I0216 15:18:37.620655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45"} Feb 16 15:18:37 crc kubenswrapper[4705]: I0216 15:18:37.895709 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 15:18:39 crc kubenswrapper[4705]: I0216 15:18:39.655782 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9"} Feb 16 15:18:40 crc kubenswrapper[4705]: I0216 15:18:40.676057 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f"} Feb 16 15:18:40 crc kubenswrapper[4705]: I0216 15:18:40.676485 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:18:40 crc kubenswrapper[4705]: I0216 15:18:40.710572 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.23204635 podStartE2EDuration="6.710491782s" podCreationTimestamp="2026-02-16 15:18:34 +0000 UTC" firstStartedPulling="2026-02-16 15:18:35.529710936 +0000 UTC m=+1509.714688052" lastFinishedPulling="2026-02-16 15:18:40.008156398 +0000 UTC m=+1514.193133484" observedRunningTime="2026-02-16 15:18:40.699079968 +0000 UTC m=+1514.884057044" watchObservedRunningTime="2026-02-16 15:18:40.710491782 +0000 UTC m=+1514.895468858" Feb 16 15:19:01 crc kubenswrapper[4705]: I0216 15:19:01.683854 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:19:01 crc kubenswrapper[4705]: I0216 15:19:01.684493 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:19:04 crc kubenswrapper[4705]: I0216 15:19:04.969123 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.668934 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-nz52p"] Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.775674 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-nz52p"] Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.831767 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-d9lbf"] Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.837826 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.886217 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-d9lbf"] Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.960573 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-config-data\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.960630 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-combined-ca-bundle\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.960745 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdl5d\" (UniqueName: \"kubernetes.io/projected/09e6dd23-2e83-460f-b42f-885bf7af0214-kube-api-access-tdl5d\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.063899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-config-data\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.066015 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-combined-ca-bundle\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.066336 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdl5d\" (UniqueName: \"kubernetes.io/projected/09e6dd23-2e83-460f-b42f-885bf7af0214-kube-api-access-tdl5d\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.071308 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-combined-ca-bundle\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.072400 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-config-data\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.083969 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdl5d\" (UniqueName: \"kubernetes.io/projected/09e6dd23-2e83-460f-b42f-885bf7af0214-kube-api-access-tdl5d\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.182269 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.794070 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-d9lbf"] Feb 16 15:19:17 crc kubenswrapper[4705]: E0216 15:19:17.976264 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:17 crc kubenswrapper[4705]: E0216 15:19:17.976696 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:17 crc kubenswrapper[4705]: E0216 15:19:17.976882 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:19:17 crc kubenswrapper[4705]: E0216 15:19:17.978123 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:19:18 crc kubenswrapper[4705]: I0216 15:19:18.200183 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-d9lbf" event={"ID":"09e6dd23-2e83-460f-b42f-885bf7af0214","Type":"ContainerStarted","Data":"418278f1cc47aacacb7fcac2908486e492493310ac4701393b7de2a51d8dc824"} Feb 16 15:19:18 crc kubenswrapper[4705]: E0216 15:19:18.202902 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:19:18 crc kubenswrapper[4705]: I0216 15:19:18.224614 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:18 crc kubenswrapper[4705]: I0216 15:19:18.433818 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72538f80-8a9f-451f-9653-4f1faeec593c" path="/var/lib/kubelet/pods/72538f80-8a9f-451f-9653-4f1faeec593c/volumes" Feb 16 15:19:19 crc kubenswrapper[4705]: E0216 15:19:19.214924 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.290962 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.423491 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.424031 4705 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="sg-core" containerID="cri-o://7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9" gracePeriod=30 Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.424064 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-notification-agent" containerID="cri-o://91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45" gracePeriod=30 Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.424106 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="proxy-httpd" containerID="cri-o://641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f" gracePeriod=30 Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.425286 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-central-agent" containerID="cri-o://a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53" gracePeriod=30 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245155 4705 generic.go:334] "Generic (PLEG): container finished" podID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerID="641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f" exitCode=0 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245206 4705 generic.go:334] "Generic (PLEG): container finished" podID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerID="7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9" exitCode=2 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245218 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerID="91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45" exitCode=0 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245227 4705 generic.go:334] "Generic (PLEG): container finished" podID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerID="a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53" exitCode=0 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245258 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f"} Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245299 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9"} Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45"} Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245326 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53"} Feb 16 15:19:20 crc kubenswrapper[4705]: E0216 15:19:20.384778 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c9c5323_a947_4c1b_ac75_ae64fd17a7a8.slice/crio-conmon-a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c9c5323_a947_4c1b_ac75_ae64fd17a7a8.slice/crio-a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.781897 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871511 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871662 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871798 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871834 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871859 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871904 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.872154 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.872183 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.872820 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.873537 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.874028 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.874055 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.883730 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g" (OuterVolumeSpecName: "kube-api-access-zp98g") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "kube-api-access-zp98g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.898924 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts" (OuterVolumeSpecName: "scripts") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.970842 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.978271 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.978307 4705 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.978337 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.996207 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.056782 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.080876 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.080913 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.140643 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data" (OuterVolumeSpecName: "config-data") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.184524 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.258799 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"b14abde630b2bee0d5eb1b3685bd917b7fcb2ae39f9d9939adcb84271012d464"} Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.258890 4705 scope.go:117] "RemoveContainer" containerID="641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.259156 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.307454 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.322675 4705 scope.go:117] "RemoveContainer" containerID="7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.329424 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.343773 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:21 crc kubenswrapper[4705]: E0216 15:19:21.344599 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="sg-core" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344621 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="sg-core" Feb 16 15:19:21 crc 
kubenswrapper[4705]: E0216 15:19:21.344640 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="proxy-httpd" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344646 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="proxy-httpd" Feb 16 15:19:21 crc kubenswrapper[4705]: E0216 15:19:21.344681 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-notification-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344688 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-notification-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: E0216 15:19:21.344695 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-central-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344703 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-central-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344950 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-notification-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344967 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="sg-core" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344981 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="proxy-httpd" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344992 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" 
containerName="ceilometer-central-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.347414 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.351318 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.351427 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.352617 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.356420 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.370743 4705 scope.go:117] "RemoveContainer" containerID="91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392742 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392868 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-config-data\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392897 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-scripts\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392934 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf945\" (UniqueName: \"kubernetes.io/projected/0eefb1ac-9933-45ff-a3de-de6a375bef45-kube-api-access-xf945\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392970 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392996 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-run-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.393032 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.393053 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.424510 4705 scope.go:117] "RemoveContainer" containerID="a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496687 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-config-data\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-scripts\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496832 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf945\" (UniqueName: \"kubernetes.io/projected/0eefb1ac-9933-45ff-a3de-de6a375bef45-kube-api-access-xf945\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496881 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496915 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-run-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " 
pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496997 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.497022 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-log-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.497198 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.500127 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-log-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.501005 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-run-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.505530 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.506048 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.506806 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-config-data\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.507330 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.507506 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-scripts\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.520299 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf945\" (UniqueName: \"kubernetes.io/projected/0eefb1ac-9933-45ff-a3de-de6a375bef45-kube-api-access-xf945\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.673663 4705 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:19:22 crc kubenswrapper[4705]: I0216 15:19:22.247220 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:22 crc kubenswrapper[4705]: I0216 15:19:22.287088 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0eefb1ac-9933-45ff-a3de-de6a375bef45","Type":"ContainerStarted","Data":"e38ea5175f250f4c1e5be4639893d0d75a4d0e0b967d1621c26438a4d0f3cb21"} Feb 16 15:19:22 crc kubenswrapper[4705]: E0216 15:19:22.354200 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:19:22 crc kubenswrapper[4705]: E0216 15:19:22.354289 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:19:22 crc kubenswrapper[4705]: E0216 15:19:22.354545 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 15:19:22 crc kubenswrapper[4705]: I0216 15:19:22.435600 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" path="/var/lib/kubelet/pods/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8/volumes" Feb 16 15:19:23 crc kubenswrapper[4705]: I0216 15:19:23.300569 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0eefb1ac-9933-45ff-a3de-de6a375bef45","Type":"ContainerStarted","Data":"e7aa3da3d6c30bd5a32a8afa1f687a1d814d7de856ca8413e867c53f3f8d407f"} Feb 16 15:19:23 crc kubenswrapper[4705]: I0216 15:19:23.568932 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" containerID="cri-o://a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" gracePeriod=604795 Feb 16 15:19:23 crc kubenswrapper[4705]: I0216 15:19:23.734997 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 16 15:19:24 crc kubenswrapper[4705]: I0216 15:19:24.316071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0eefb1ac-9933-45ff-a3de-de6a375bef45","Type":"ContainerStarted","Data":"74c6e75428ac9c8870fd387cf77f7813e11fdba438b2629b37ad9589d37dca29"} Feb 16 15:19:24 crc kubenswrapper[4705]: I0216 15:19:24.925669 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="rabbitmq" containerID="cri-o://9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f" gracePeriod=604795 Feb 16 15:19:25 crc kubenswrapper[4705]: E0216 15:19:25.904421 4705 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:26 crc kubenswrapper[4705]: I0216 15:19:26.353808 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0eefb1ac-9933-45ff-a3de-de6a375bef45","Type":"ContainerStarted","Data":"a7d39367f686cd15b7f8f95563076f8b9c94da472429a2bca19c7cb952502e12"} Feb 16 15:19:26 crc kubenswrapper[4705]: I0216 15:19:26.354355 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:19:26 crc kubenswrapper[4705]: E0216 15:19:26.357436 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:27 crc kubenswrapper[4705]: E0216 15:19:27.370248 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:29 crc kubenswrapper[4705]: I0216 15:19:29.508882 4705 scope.go:117] "RemoveContainer" containerID="6b13db9b9dc4dcec392ffa4e74f00a9ee43871effc42f68cb3ed77e75924c36e" Feb 16 
15:19:29 crc kubenswrapper[4705]: I0216 15:19:29.570045 4705 scope.go:117] "RemoveContainer" containerID="02261dd51fff83f1f769426874aaf3ab8c54221acecfe72a2bd0b7b7e293e788" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.387999 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402602 4705 generic.go:334] "Generic (PLEG): container finished" podID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerID="a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" exitCode=0 Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402671 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerDied","Data":"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641"} Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402716 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerDied","Data":"ba74fdfcb7efec48976e7232011d375059db8616337cd4b51be00bbb131415c9"} Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402742 4705 scope.go:117] "RemoveContainer" containerID="a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402746 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.467087 4705 scope.go:117] "RemoveContainer" containerID="3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.597147 4705 scope.go:117] "RemoveContainer" containerID="a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.599280 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641\": container with ID starting with a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641 not found: ID does not exist" containerID="a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.599344 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641"} err="failed to get container status \"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641\": rpc error: code = NotFound desc = could not find container \"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641\": container with ID starting with a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641 not found: ID does not exist" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.599393 4705 scope.go:117] "RemoveContainer" containerID="3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.599968 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523\": container with ID starting with 
3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523 not found: ID does not exist" containerID="3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.600019 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523"} err="failed to get container status \"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523\": rpc error: code = NotFound desc = could not find container \"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523\": container with ID starting with 3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523 not found: ID does not exist" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.623711 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.624735 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.624969 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.625007 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.625034 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.625882 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629310 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629388 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629454 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrknb\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") pod 
\"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629528 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629583 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629781 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.631321 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.633354 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.633516 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.638553 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.640029 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info" (OuterVolumeSpecName: "pod-info") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.647931 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.648282 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.651475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb" (OuterVolumeSpecName: "kube-api-access-vrknb") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "kube-api-access-vrknb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.671911 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data" (OuterVolumeSpecName: "config-data") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.677990 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e podName:f6b410b5-951c-43d2-b846-3fef02ec0f7f nodeName:}" failed. No retries permitted until 2026-02-16 15:19:31.177958837 +0000 UTC m=+1565.362935913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.685401 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.685497 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.685732 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.686872 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.733382 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf" (OuterVolumeSpecName: "server-conf") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736401 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736425 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736438 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736448 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736457 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736467 4705 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736477 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrknb\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 
15:19:30.809657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.840179 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.251836 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.281560 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e" (OuterVolumeSpecName: "persistence") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "pvc-49db22ca-5365-4dcc-af52-2ea57a09051e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.355787 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") on node \"crc\" " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.356246 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.379051 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.411489 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.412738 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="setup-container" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.412781 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="setup-container" Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.412878 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.412888 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.413891 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.416008 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.438977 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.450263 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.450502 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-49db22ca-5365-4dcc-af52-2ea57a09051e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e") on node "crc" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.456874 4705 generic.go:334] "Generic (PLEG): container finished" podID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerID="9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f" exitCode=0 Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.456946 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerDied","Data":"9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f"} Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.460170 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.564333 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 
crc kubenswrapper[4705]: I0216 15:19:31.564525 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-config-data\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.564575 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.564590 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565155 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565235 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565262 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3671c78-83d9-45b6-a869-d08abfa12906-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565285 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565318 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3671c78-83d9-45b6-a869-d08abfa12906-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565340 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pmzk\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-kube-api-access-8pmzk\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565390 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669409 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-config-data\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669475 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669497 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669580 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669654 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3671c78-83d9-45b6-a869-d08abfa12906-pod-info\") pod 
\"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669677 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669701 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3671c78-83d9-45b6-a869-d08abfa12906-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669720 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pmzk\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-kube-api-access-8pmzk\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669749 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669784 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " 
pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.670357 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.671131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.671645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.673183 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.676621 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-config-data\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.678452 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f3671c78-83d9-45b6-a869-d08abfa12906-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.678477 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3671c78-83d9-45b6-a869-d08abfa12906-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.678676 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.679140 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2e04bcb153e3e04f037e1fc841d6f137a96f2052e5c7d3319ec9bf09db685a60/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.679587 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.684424 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.684493 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.684541 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.685188 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.685761 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.685824 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" gracePeriod=600 Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.700187 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pmzk\" 
(UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-kube-api-access-8pmzk\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.752781 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.775769 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.839324 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.850880 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.869290 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"] Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.870121 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.870139 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" 
containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.870156 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="setup-container" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.870163 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="setup-container" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.870393 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.873734 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.886889 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.937128 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"] Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.987758 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.988277 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.988379 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992474 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992528 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992572 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992610 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992652 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: 
\"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992671 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992749 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992811 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993326 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993384 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993428 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993614 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc 
kubenswrapper[4705]: I0216 15:19:31.996176 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.996740 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.000827 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.002534 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.009642 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp" (OuterVolumeSpecName: "kube-api-access-gfwxp") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "kube-api-access-gfwxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.018493 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.018567 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info" (OuterVolumeSpecName: "pod-info") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.041533 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a" (OuterVolumeSpecName: "persistence") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.078941 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data" (OuterVolumeSpecName: "config-data") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.098028 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.098131 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.098332 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.099048 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" 
Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.100589 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.100750 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.100808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.100991 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101234 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 
15:19:32.101721 4705 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101810 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101872 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101928 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101986 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102047 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102103 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102186 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") on node \"crc\" " Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102251 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102926 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.107756 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.107918 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.110796 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.125048 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.131421 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf" (OuterVolumeSpecName: "server-conf") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.208196 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.217275 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.218823 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.219021 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a") on node "crc" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.241883 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.317610 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.317670 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.438447 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" path="/var/lib/kubelet/pods/f6b410b5-951c-43d2-b846-3fef02ec0f7f/volumes" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.490042 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.490254 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerDied","Data":"9536c4826f2994651344a9956c3c00d2cb404777160d90908e2937cd52e8fb5f"} Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.490319 4705 scope.go:117] "RemoveContainer" containerID="9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.496284 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" exitCode=0 Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.496335 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"} Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.497535 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:19:32 crc kubenswrapper[4705]: E0216 15:19:32.497896 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.572127 4705 scope.go:117] "RemoveContainer" containerID="663ebd3ccb0d52cf06babb260d76ccd359a0593b49138f63e6178bfe5bfd914d" Feb 16 
15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.572730 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.615344 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:32 crc kubenswrapper[4705]: W0216 15:19:32.634710 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3671c78_83d9_45b6_a869_d08abfa12906.slice/crio-96966772efe91e39ec17d0e663e4cc95dd42475501b39c108948b1d90bb5cec6 WatchSource:0}: Error finding container 96966772efe91e39ec17d0e663e4cc95dd42475501b39c108948b1d90bb5cec6: Status 404 returned error can't find the container with id 96966772efe91e39ec17d0e663e4cc95dd42475501b39c108948b1d90bb5cec6 Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.638296 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.660525 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.664275 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.667095 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.668440 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.668610 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jzl8w" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.668866 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.669143 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.669311 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.669764 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739350 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739432 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739475 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35504e73-1115-4e30-8ef7-95e85f31eaf6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739549 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739577 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35504e73-1115-4e30-8ef7-95e85f31eaf6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739614 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739663 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739693 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739713 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gnvt\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-kube-api-access-8gnvt\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.753791 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.792952 4705 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.801639 4705 scope.go:117] "RemoveContainer" containerID="99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38" Feb 16 15:19:32 crc kubenswrapper[4705]: W0216 15:19:32.809154 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8684c18_9b3b_468c_b055_c6bbc838aba7.slice/crio-119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601 WatchSource:0}: Error finding container 119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601: Status 404 returned error can't find the container with id 119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601 Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.842804 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843197 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843238 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843282 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35504e73-1115-4e30-8ef7-95e85f31eaf6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843306 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843331 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35504e73-1115-4e30-8ef7-95e85f31eaf6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843386 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843524 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843575 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843600 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gnvt\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-kube-api-access-8gnvt\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.844969 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.846762 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.846815 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.847577 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.848485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.858279 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.858322 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/15fddb9283d0361ec376f6d3697b3a7dae141e971c813fd76f875f1c98aad2dc/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.858922 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.866608 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35504e73-1115-4e30-8ef7-95e85f31eaf6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.868698 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.870495 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gnvt\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-kube-api-access-8gnvt\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.875035 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35504e73-1115-4e30-8ef7-95e85f31eaf6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.915524 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.164201 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.517598 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f3671c78-83d9-45b6-a869-d08abfa12906","Type":"ContainerStarted","Data":"96966772efe91e39ec17d0e663e4cc95dd42475501b39c108948b1d90bb5cec6"} Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.522063 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerID="a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76" exitCode=0 Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.522123 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerDied","Data":"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76"} Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.522157 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerStarted","Data":"119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601"} Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.694604 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:33 crc kubenswrapper[4705]: W0216 15:19:33.707988 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35504e73_1115_4e30_8ef7_95e85f31eaf6.slice/crio-78c2d5a0aca9862122f4795e9053c0cadcd9463584a5917f7da720916fb56c9a WatchSource:0}: Error finding container 78c2d5a0aca9862122f4795e9053c0cadcd9463584a5917f7da720916fb56c9a: Status 404 returned error can't find the container with id 78c2d5a0aca9862122f4795e9053c0cadcd9463584a5917f7da720916fb56c9a Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 
15:19:34.436190 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" path="/var/lib/kubelet/pods/070373d6-b0bd-43e2-bdf5-ca300875e65d/volumes" Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 15:19:34.538021 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerStarted","Data":"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa"} Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 15:19:34.538364 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 15:19:34.540643 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35504e73-1115-4e30-8ef7-95e85f31eaf6","Type":"ContainerStarted","Data":"78c2d5a0aca9862122f4795e9053c0cadcd9463584a5917f7da720916fb56c9a"} Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 15:19:34.566962 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" podStartSLOduration=3.56694215 podStartE2EDuration="3.56694215s" podCreationTimestamp="2026-02-16 15:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:19:34.559610702 +0000 UTC m=+1568.744587778" watchObservedRunningTime="2026-02-16 15:19:34.56694215 +0000 UTC m=+1568.751919226" Feb 16 15:19:35 crc kubenswrapper[4705]: I0216 15:19:35.558360 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f3671c78-83d9-45b6-a869-d08abfa12906","Type":"ContainerStarted","Data":"c60cf7b300f19c6c6692e856418236dd8a19116e7d3d027f62ba6710b5671bac"} Feb 16 15:19:36 crc kubenswrapper[4705]: I0216 15:19:36.577799 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35504e73-1115-4e30-8ef7-95e85f31eaf6","Type":"ContainerStarted","Data":"fef7545d7c2f39215e80c2f1481975dc678006ee7b6d820d9447a191742f14ea"} Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.549641 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"] Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.554584 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.564526 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"] Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.661399 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.661499 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.661542 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc 
kubenswrapper[4705]: I0216 15:19:39.764013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.764103 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.764142 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.764675 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.765109 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.791212 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.886049 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:40 crc kubenswrapper[4705]: I0216 15:19:40.441538 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"] Feb 16 15:19:40 crc kubenswrapper[4705]: W0216 15:19:40.449102 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf747da3_b6aa_42f8_8339_fd2189d24bd0.slice/crio-9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3 WatchSource:0}: Error finding container 9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3: Status 404 returned error can't find the container with id 9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3 Feb 16 15:19:40 crc kubenswrapper[4705]: I0216 15:19:40.666314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerStarted","Data":"9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3"} Feb 16 15:19:41 crc kubenswrapper[4705]: I0216 15:19:41.681308 4705 generic.go:334] "Generic (PLEG): container finished" podID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerID="793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26" exitCode=0 Feb 16 15:19:41 crc kubenswrapper[4705]: I0216 15:19:41.681385 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" 
event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerDied","Data":"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26"} Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.275563 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.358800 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.359066 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="dnsmasq-dns" containerID="cri-o://44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7" gracePeriod=10 Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.459587 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.568386 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.568455 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.568622 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.578071 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.628667 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-l9dk8"] Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.641706 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-l9dk8"] Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.641822 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.706523 4705 generic.go:334] "Generic (PLEG): container finished" podID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerID="44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7" exitCode=0 Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.707861 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerDied","Data":"44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7"} Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.710105 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.799863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800142 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800188 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800345 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-config\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800703 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pch6v\" (UniqueName: \"kubernetes.io/projected/414f383c-09a6-4895-81cc-e12f73391831-kube-api-access-pch6v\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: 
\"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800830 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-config\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903377 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903424 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pch6v\" (UniqueName: \"kubernetes.io/projected/414f383c-09a6-4895-81cc-e12f73391831-kube-api-access-pch6v\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903474 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903532 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903592 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903612 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.904744 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.905281 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-config\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 
15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.905792 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.907249 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.907598 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.908093 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.931319 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pch6v\" (UniqueName: \"kubernetes.io/projected/414f383c-09a6-4895-81cc-e12f73391831-kube-api-access-pch6v\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.986084 4705 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.219997 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.325632 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.325845 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.325867 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.325914 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.326917 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: 
\"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.327457 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.332662 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr" (OuterVolumeSpecName: "kube-api-access-5z7sr") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "kube-api-access-5z7sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.337291 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.449587 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.450798 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.450897 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config" (OuterVolumeSpecName: "config") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.464320 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.466825 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545619 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545672 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545687 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545701 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545713 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.555855 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-l9dk8"]
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.722825 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" event={"ID":"414f383c-09a6-4895-81cc-e12f73391831","Type":"ContainerStarted","Data":"5da80254409d1f5702b9b50ca3cf24d99fa5621b6bbfa7fd535c598b1f8d5c4c"}
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.725442 4705 generic.go:334] "Generic (PLEG): container finished" podID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerID="0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b" exitCode=0
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.725499 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerDied","Data":"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b"}
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.731634 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerDied","Data":"fd288e684e0a43e4b376cb33683431b8af354b638eab9d3f39fe75d11b79e614"}
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.731694 4705 scope.go:117] "RemoveContainer" containerID="44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7"
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.732813 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.776900 4705 scope.go:117] "RemoveContainer" containerID="6eed687bcb719d3e812c0d5596618acff3bcb4d19391166e9b43a17a41b58c2d"
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.808193 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"]
Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.824620 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"]
Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.435432 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" path="/var/lib/kubelet/pods/33cb0a6c-7599-4301-b7f4-630b9ccfdf42/volumes"
Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.750014 4705 generic.go:334] "Generic (PLEG): container finished" podID="414f383c-09a6-4895-81cc-e12f73391831" containerID="a4367fb47635f9d5624022d97a599f3c7e514c4f22ebb280fe343935e0e53ac2" exitCode=0
Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.750075 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" event={"ID":"414f383c-09a6-4895-81cc-e12f73391831","Type":"ContainerDied","Data":"a4367fb47635f9d5624022d97a599f3c7e514c4f22ebb280fe343935e0e53ac2"}
Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.754925 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerStarted","Data":"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd"}
Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.825464 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vkkhj" podStartSLOduration=3.373897167 podStartE2EDuration="5.825443802s" podCreationTimestamp="2026-02-16 15:19:39 +0000 UTC" firstStartedPulling="2026-02-16 15:19:41.68455024 +0000 UTC m=+1575.869527316" lastFinishedPulling="2026-02-16 15:19:44.136096875 +0000 UTC m=+1578.321073951" observedRunningTime="2026-02-16 15:19:44.822060856 +0000 UTC m=+1579.007037942" watchObservedRunningTime="2026-02-16 15:19:44.825443802 +0000 UTC m=+1579.010420898"
Feb 16 15:19:45 crc kubenswrapper[4705]: E0216 15:19:45.442259 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:19:45 crc kubenswrapper[4705]: I0216 15:19:45.769117 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" event={"ID":"414f383c-09a6-4895-81cc-e12f73391831","Type":"ContainerStarted","Data":"5dc53870a6819e03dc212784d395a4a5c246cb7933c229fdea896abac87855f2"}
Feb 16 15:19:45 crc kubenswrapper[4705]: I0216 15:19:45.795385 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" podStartSLOduration=3.795347276 podStartE2EDuration="3.795347276s" podCreationTimestamp="2026-02-16 15:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:19:45.793323319 +0000 UTC m=+1579.978300405" watchObservedRunningTime="2026-02-16 15:19:45.795347276 +0000 UTC m=+1579.980324352"
Feb 16 15:19:46 crc kubenswrapper[4705]: I0216 15:19:46.429433 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"
Feb 16 15:19:46 crc kubenswrapper[4705]: E0216 15:19:46.429760 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:19:46 crc kubenswrapper[4705]: I0216 15:19:46.792527 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8"
Feb 16 15:19:49 crc kubenswrapper[4705]: I0216 15:19:49.886351 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vkkhj"
Feb 16 15:19:49 crc kubenswrapper[4705]: I0216 15:19:49.887292 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vkkhj"
Feb 16 15:19:49 crc kubenswrapper[4705]: I0216 15:19:49.943013 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vkkhj"
Feb 16 15:19:50 crc kubenswrapper[4705]: I0216 15:19:50.935034 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vkkhj"
Feb 16 15:19:51 crc kubenswrapper[4705]: I0216 15:19:51.041632 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"]
Feb 16 15:19:52 crc kubenswrapper[4705]: I0216 15:19:52.887465 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vkkhj" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="registry-server" containerID="cri-o://873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd" gracePeriod=2
Feb 16 15:19:52 crc kubenswrapper[4705]: I0216 15:19:52.987977 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8"
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.104221 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"]
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.104729 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="dnsmasq-dns" containerID="cri-o://e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa" gracePeriod=10
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.689457 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkkhj"
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.788092 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") pod \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.788235 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") pod \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.788498 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") pod \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.789498 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities" (OuterVolumeSpecName: "utilities") pod "cf747da3-b6aa-42f8-8339-fd2189d24bd0" (UID: "cf747da3-b6aa-42f8-8339-fd2189d24bd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.798129 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576" (OuterVolumeSpecName: "kube-api-access-5l576") pod "cf747da3-b6aa-42f8-8339-fd2189d24bd0" (UID: "cf747da3-b6aa-42f8-8339-fd2189d24bd0"). InnerVolumeSpecName "kube-api-access-5l576". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.812842 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw"
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.822525 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf747da3-b6aa-42f8-8339-fd2189d24bd0" (UID: "cf747da3-b6aa-42f8-8339-fd2189d24bd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.890888 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.890991 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891124 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891242 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891442 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891496 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891569 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") "
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.892280 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.892299 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.892313 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.901035 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5" (OuterVolumeSpecName: "kube-api-access-fmtx5") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "kube-api-access-fmtx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.906623 4705 generic.go:334] "Generic (PLEG): container finished" podID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerID="873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd" exitCode=0
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.906709 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerDied","Data":"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd"}
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.906749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerDied","Data":"9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3"}
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.906792 4705 scope.go:117] "RemoveContainer" containerID="873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd"
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.907176 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkkhj"
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.910464 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerID="e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa" exitCode=0
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.910491 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerDied","Data":"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa"}
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.910507 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerDied","Data":"119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601"}
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.910616 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw"
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.962902 4705 scope.go:117] "RemoveContainer" containerID="0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b"
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.964844 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.995621 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.995677 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.027603 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"]
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.040194 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"]
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.054898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.067329 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.069230 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.080661 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.097018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config" (OuterVolumeSpecName: "config") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.097988 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") "
Feb 16 15:19:54 crc kubenswrapper[4705]: W0216 15:19:54.098226 4705 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d8684c18-9b3b-468c-b055-c6bbc838aba7/volumes/kubernetes.io~configmap/config
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098336 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config" (OuterVolumeSpecName: "config") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098854 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098880 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098894 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098905 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098913 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.220227 4705 scope.go:117] "RemoveContainer" containerID="793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287010 4705 scope.go:117] "RemoveContainer" containerID="873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd"
Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.287557 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd\": container with ID starting with 873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd not found: ID does not exist" containerID="873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287592 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd"} err="failed to get container status \"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd\": rpc error: code = NotFound desc = could not find container \"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd\": container with ID starting with 873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd not found: ID does not exist"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287614 4705 scope.go:117] "RemoveContainer" containerID="0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b"
Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.287819 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b\": container with ID starting with 0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b not found: ID does not exist" containerID="0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287839 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b"} err="failed to get container status \"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b\": rpc error: code = NotFound desc = could not find container \"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b\": container with ID starting with 0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b not found: ID does not exist"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287854 4705 scope.go:117] "RemoveContainer" containerID="793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26"
Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.288182 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26\": container with ID starting with 793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26 not found: ID does not exist" containerID="793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.288310 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26"} err="failed to get container status \"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26\": rpc error: code = NotFound desc = could not find container \"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26\": container with ID starting with 793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26 not found: ID does not exist"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.288449 4705 scope.go:117] "RemoveContainer" containerID="e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.299356 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"]
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.311444 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"]
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.343689 4705 scope.go:117] "RemoveContainer" containerID="a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.372012 4705 scope.go:117] "RemoveContainer" containerID="e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa"
Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.372864 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa\": container with ID starting with e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa not found: ID does not exist" containerID="e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.372920 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa"} err="failed to get container status \"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa\": rpc error: code = NotFound desc = could not find container \"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa\": container with ID starting with e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa not found: ID does not exist"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.372956 4705 scope.go:117] "RemoveContainer" containerID="a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76"
Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.373439 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76\": container with ID starting with a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76 not found: ID does not exist" containerID="a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.373464 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76"} err="failed to get container status \"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76\": rpc error: code = NotFound desc = could not find container \"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76\": container with ID starting with a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76 not found: ID does not exist"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.434948 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" path="/var/lib/kubelet/pods/cf747da3-b6aa-42f8-8339-fd2189d24bd0/volumes"
Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.435718 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" path="/var/lib/kubelet/pods/d8684c18-9b3b-468c-b055-c6bbc838aba7/volumes"
Feb 16 15:19:58 crc kubenswrapper[4705]: E0216 15:19:58.430209 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:19:59 crc kubenswrapper[4705]: E0216 15:19:59.554265 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 15:19:59 crc kubenswrapper[4705]: E0216 15:19:59.554715 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 15:19:59 crc kubenswrapper[4705]: E0216 15:19:59.554879 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 15:19:59 crc kubenswrapper[4705]: E0216 15:19:59.556188 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:20:00 crc kubenswrapper[4705]: I0216 15:20:00.420580 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"
Feb 16 15:20:00 crc kubenswrapper[4705]: E0216 15:20:00.421273 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.329215 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7"]
Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331436 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="dnsmasq-dns"
Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331468 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="dnsmasq-dns"
Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331495 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="registry-server"
Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331507 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="registry-server"
Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331519 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="dnsmasq-dns"
Feb 16 15:20:07 crc
kubenswrapper[4705]: I0216 15:20:07.331527 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="dnsmasq-dns" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331544 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="init" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331552 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="init" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331569 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="extract-content" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331576 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="extract-content" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331598 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="extract-utilities" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331606 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="extract-utilities" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331631 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="init" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331639 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="init" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.332018 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="registry-server" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.332033 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="dnsmasq-dns" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.332045 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="dnsmasq-dns" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.333781 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.336150 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.336295 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.337588 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.348735 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7"] Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.371683 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.460267 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.460435 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.460596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.460749 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.564107 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.564211 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.564841 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.564944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.576165 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.576287 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.578930 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.585737 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.693885 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.138583 4705 generic.go:334] "Generic (PLEG): container finished" podID="f3671c78-83d9-45b6-a869-d08abfa12906" containerID="c60cf7b300f19c6c6692e856418236dd8a19116e7d3d027f62ba6710b5671bac" exitCode=0 Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.138662 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f3671c78-83d9-45b6-a869-d08abfa12906","Type":"ContainerDied","Data":"c60cf7b300f19c6c6692e856418236dd8a19116e7d3d027f62ba6710b5671bac"} Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.141152 4705 generic.go:334] "Generic (PLEG): container finished" podID="35504e73-1115-4e30-8ef7-95e85f31eaf6" containerID="fef7545d7c2f39215e80c2f1481975dc678006ee7b6d820d9447a191742f14ea" exitCode=0 Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.141192 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35504e73-1115-4e30-8ef7-95e85f31eaf6","Type":"ContainerDied","Data":"fef7545d7c2f39215e80c2f1481975dc678006ee7b6d820d9447a191742f14ea"} Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.401420 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7"] Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.157206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f3671c78-83d9-45b6-a869-d08abfa12906","Type":"ContainerStarted","Data":"2b5d4c63816c241b28d9efa0f3d9ef3b166de1720523b905fc916740d660f255"} Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.157939 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.165783 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35504e73-1115-4e30-8ef7-95e85f31eaf6","Type":"ContainerStarted","Data":"6853c50e72d1c1a33aaeee2eb79f064dad0a0023f92687c42b1df2057faad392"} Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.166146 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.169160 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" event={"ID":"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0","Type":"ContainerStarted","Data":"adf58fbd5e38b5411e07f7ddeda61f720afb2ed034692fc1b3b09d54b2b865b0"} Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.197740 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.197719952 podStartE2EDuration="38.197719952s" podCreationTimestamp="2026-02-16 15:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:20:09.19129352 +0000 UTC m=+1603.376270616" watchObservedRunningTime="2026-02-16 15:20:09.197719952 +0000 UTC m=+1603.382697028" Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.246597 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.246570406 podStartE2EDuration="37.246570406s" podCreationTimestamp="2026-02-16 15:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:20:09.226919379 +0000 UTC m=+1603.411896455" watchObservedRunningTime="2026-02-16 15:20:09.246570406 +0000 UTC m=+1603.431547482" Feb 16 15:20:11 crc kubenswrapper[4705]: E0216 15:20:11.423187 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:20:12 crc kubenswrapper[4705]: I0216 15:20:12.421064 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:20:12 crc kubenswrapper[4705]: E0216 15:20:12.421658 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:20:13 crc kubenswrapper[4705]: E0216 15:20:13.550307 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:20:13 crc kubenswrapper[4705]: E0216 15:20:13.550390 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:20:13 crc kubenswrapper[4705]: E0216 15:20:13.550585 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:20:13 crc kubenswrapper[4705]: E0216 15:20:13.551774 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:20:20 crc kubenswrapper[4705]: I0216 15:20:20.330492 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" event={"ID":"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0","Type":"ContainerStarted","Data":"1876ff5fd93e1e219015323ca33bede9ed97b798b3572be3fd3f4dde7c3e2f72"} Feb 16 15:20:20 crc kubenswrapper[4705]: I0216 15:20:20.356535 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" podStartSLOduration=2.302403037 podStartE2EDuration="13.356513726s" podCreationTimestamp="2026-02-16 15:20:07 +0000 UTC" firstStartedPulling="2026-02-16 15:20:08.409793752 +0000 UTC m=+1602.594770818" lastFinishedPulling="2026-02-16 15:20:19.463904421 +0000 UTC m=+1613.648881507" observedRunningTime="2026-02-16 15:20:20.350254038 +0000 UTC m=+1614.535231124" watchObservedRunningTime="2026-02-16 15:20:20.356513726 +0000 UTC m=+1614.541490802" Feb 16 15:20:21 crc kubenswrapper[4705]: I0216 15:20:21.781651 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 16 15:20:21 crc kubenswrapper[4705]: I0216 15:20:21.885709 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:23 crc kubenswrapper[4705]: I0216 15:20:23.167636 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:20:23 crc kubenswrapper[4705]: I0216 15:20:23.419706 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:20:23 crc kubenswrapper[4705]: E0216 15:20:23.420081 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:20:24 crc kubenswrapper[4705]: E0216 15:20:24.422946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:20:25 crc kubenswrapper[4705]: E0216 15:20:25.422053 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:20:27 crc kubenswrapper[4705]: I0216 15:20:27.078185 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" containerID="cri-o://eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce" gracePeriod=604795 Feb 16 15:20:29 crc kubenswrapper[4705]: I0216 15:20:29.746675 4705 scope.go:117] "RemoveContainer" containerID="eda342c5c8c6a51871935a7c42d9108a69f95180c1db4ddf74979e0a43434713" Feb 16 15:20:29 crc kubenswrapper[4705]: I0216 15:20:29.783564 4705 scope.go:117] "RemoveContainer" containerID="0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802" Feb 16 15:20:29 crc kubenswrapper[4705]: I0216 15:20:29.882131 4705 scope.go:117] "RemoveContainer" containerID="338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37" Feb 16 15:20:29 crc 
kubenswrapper[4705]: I0216 15:20:29.941294 4705 scope.go:117] "RemoveContainer" containerID="b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe" Feb 16 15:20:29 crc kubenswrapper[4705]: I0216 15:20:29.978815 4705 scope.go:117] "RemoveContainer" containerID="f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218" Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.014935 4705 scope.go:117] "RemoveContainer" containerID="e19781e10423d51e9d0ddb50f45ae545361f191e04463e485e5d4a1ca06560e1" Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.103051 4705 scope.go:117] "RemoveContainer" containerID="cc5c6c10d91867ec0e668fe37ec2a652d379064601d63333e598987b86ebe834" Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.145032 4705 scope.go:117] "RemoveContainer" containerID="f5c17e7d39b9ddbcba6b3a6b64fb5b75e17d9532faec51dee99c1ace5575000a" Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.459902 4705 generic.go:334] "Generic (PLEG): container finished" podID="9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" containerID="1876ff5fd93e1e219015323ca33bede9ed97b798b3572be3fd3f4dde7c3e2f72" exitCode=0 Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.459980 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" event={"ID":"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0","Type":"ContainerDied","Data":"1876ff5fd93e1e219015323ca33bede9ed97b798b3572be3fd3f4dde7c3e2f72"} Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.064397 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.203654 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") pod \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.203730 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") pod \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.203800 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") pod \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.203952 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") pod \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.215651 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" (UID: "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.229176 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx" (OuterVolumeSpecName: "kube-api-access-mbnkx") pod "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" (UID: "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0"). InnerVolumeSpecName "kube-api-access-mbnkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.244240 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory" (OuterVolumeSpecName: "inventory") pod "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" (UID: "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.250955 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" (UID: "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.308825 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.308913 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.308931 4705 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.308948 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.540610 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" event={"ID":"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0","Type":"ContainerDied","Data":"adf58fbd5e38b5411e07f7ddeda61f720afb2ed034692fc1b3b09d54b2b865b0"} Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.540658 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf58fbd5e38b5411e07f7ddeda61f720afb2ed034692fc1b3b09d54b2b865b0" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.540725 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.657457 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59"] Feb 16 15:20:32 crc kubenswrapper[4705]: E0216 15:20:32.658104 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.658125 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.658335 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.659252 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.670178 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.670632 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.670816 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.674041 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.691982 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59"] Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.727722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.727857 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.727977 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.830292 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.830899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.831005 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.839627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: 
\"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.841133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.850261 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.018935 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.572402 4705 generic.go:334] "Generic (PLEG): container finished" podID="139788ad-b160-4139-a6af-094e33c581e5" containerID="eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce" exitCode=0 Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.572899 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerDied","Data":"eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce"} Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.683841 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59"] Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.751309 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.770864 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.771149 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.771293 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") pod 
\"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.771416 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.772907 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.772957 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.773042 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.773646 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.775548 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.789844 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.789989 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.790126 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.790339 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 
15:20:33.790383 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.793220 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.794871 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.794910 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.794924 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.848993 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: 
"139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.849109 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9" (OuterVolumeSpecName: "kube-api-access-tfsp9") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "kube-api-access-tfsp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.853104 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0" (OuterVolumeSpecName: "persistence") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.854617 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info" (OuterVolumeSpecName: "pod-info") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.876950 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data" (OuterVolumeSpecName: "config-data") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899047 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899091 4705 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899108 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899152 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") on node \"crc\" " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899169 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899182 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.933332 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf" (OuterVolumeSpecName: "server-conf") pod 
"139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.973879 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.974119 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0") on node "crc" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.006122 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.006179 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.047439 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.109241 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.590351 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" event={"ID":"c73749fc-8501-405f-bd7e-de9fca2d968a","Type":"ContainerStarted","Data":"30d060056f13bbfbd9ccded1068f9b818cfdcf84c65f6d49b4c123711de7d04e"} Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.590887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" event={"ID":"c73749fc-8501-405f-bd7e-de9fca2d968a","Type":"ContainerStarted","Data":"32f21ecbb184ef9bff3f82a607d8e5ad680acdc96d2e91856571eff05a285b14"} Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.593273 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerDied","Data":"ad93a17a230e0f89ffb728c848e626d65cc868f03d8c72f03802d0c82854159a"} Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.593345 4705 scope.go:117] "RemoveContainer" containerID="eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.593492 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.635410 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" podStartSLOduration=2.2049889990000002 podStartE2EDuration="2.63538559s" podCreationTimestamp="2026-02-16 15:20:32 +0000 UTC" firstStartedPulling="2026-02-16 15:20:33.695333911 +0000 UTC m=+1627.880310997" lastFinishedPulling="2026-02-16 15:20:34.125730512 +0000 UTC m=+1628.310707588" observedRunningTime="2026-02-16 15:20:34.61491952 +0000 UTC m=+1628.799896606" watchObservedRunningTime="2026-02-16 15:20:34.63538559 +0000 UTC m=+1628.820362666" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.647263 4705 scope.go:117] "RemoveContainer" containerID="c45bc0861e5e942a3fddb03b7864490ab4f0322209d56a4aa3501d6face13652" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.663767 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.687335 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.705506 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:34 crc kubenswrapper[4705]: E0216 15:20:34.706270 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="setup-container" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.706289 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="setup-container" Feb 16 15:20:34 crc kubenswrapper[4705]: E0216 15:20:34.706351 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 
15:20:34.706359 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.706659 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.708200 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.718445 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830574 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-config-data\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830627 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830705 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830884 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp84p\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-kube-api-access-tp84p\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830909 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830950 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830974 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.831021 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934057 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-config-data\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934109 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934176 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: 
\"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934198 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934280 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934326 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp84p\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-kube-api-access-tp84p\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934349 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 
15:20:34.934399 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934417 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934459 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.935472 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.936009 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-config-data\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.937133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.937797 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.938063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.945033 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.945341 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.945468 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" 
Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.945687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.947929 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.947979 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75a91b98174d7040097f89a93bfd5946d971fbacf68f20932d87234b8e73eca0/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.961309 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp84p\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-kube-api-access-tp84p\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:35 crc kubenswrapper[4705]: I0216 15:20:35.021408 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:35 crc kubenswrapper[4705]: I0216 15:20:35.031207 4705 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:20:35 crc kubenswrapper[4705]: E0216 15:20:35.424041 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:20:35 crc kubenswrapper[4705]: I0216 15:20:35.642904 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:36 crc kubenswrapper[4705]: I0216 15:20:36.420123 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:20:36 crc kubenswrapper[4705]: E0216 15:20:36.420813 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:20:36 crc kubenswrapper[4705]: I0216 15:20:36.447656 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139788ad-b160-4139-a6af-094e33c581e5" path="/var/lib/kubelet/pods/139788ad-b160-4139-a6af-094e33c581e5/volumes" Feb 16 15:20:36 crc kubenswrapper[4705]: I0216 15:20:36.623029 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0","Type":"ContainerStarted","Data":"9252c67adb26afbb27ee35987fff52022c14379f791402a435b29a668b7d4162"} Feb 16 15:20:37 crc kubenswrapper[4705]: E0216 15:20:37.421626 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:20:37 crc kubenswrapper[4705]: I0216 15:20:37.643860 4705 generic.go:334] "Generic (PLEG): container finished" podID="c73749fc-8501-405f-bd7e-de9fca2d968a" containerID="30d060056f13bbfbd9ccded1068f9b818cfdcf84c65f6d49b4c123711de7d04e" exitCode=0 Feb 16 15:20:37 crc kubenswrapper[4705]: I0216 15:20:37.643946 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" event={"ID":"c73749fc-8501-405f-bd7e-de9fca2d968a","Type":"ContainerDied","Data":"30d060056f13bbfbd9ccded1068f9b818cfdcf84c65f6d49b4c123711de7d04e"} Feb 16 15:20:38 crc kubenswrapper[4705]: I0216 15:20:38.661401 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0","Type":"ContainerStarted","Data":"4889bc008f884b027c0a7a92ff8bbfd0547ce687450d7545c32f2bdf009295b9"} Feb 16 15:20:38 crc kubenswrapper[4705]: I0216 15:20:38.703285 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: i/o timeout" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.276270 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.403541 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") pod \"c73749fc-8501-405f-bd7e-de9fca2d968a\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.403986 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") pod \"c73749fc-8501-405f-bd7e-de9fca2d968a\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.404093 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") pod \"c73749fc-8501-405f-bd7e-de9fca2d968a\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.417133 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n" (OuterVolumeSpecName: "kube-api-access-hgt6n") pod "c73749fc-8501-405f-bd7e-de9fca2d968a" (UID: "c73749fc-8501-405f-bd7e-de9fca2d968a"). InnerVolumeSpecName "kube-api-access-hgt6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.449172 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c73749fc-8501-405f-bd7e-de9fca2d968a" (UID: "c73749fc-8501-405f-bd7e-de9fca2d968a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.459175 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory" (OuterVolumeSpecName: "inventory") pod "c73749fc-8501-405f-bd7e-de9fca2d968a" (UID: "c73749fc-8501-405f-bd7e-de9fca2d968a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.509410 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.509447 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.509458 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.694135 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.695500 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" event={"ID":"c73749fc-8501-405f-bd7e-de9fca2d968a","Type":"ContainerDied","Data":"32f21ecbb184ef9bff3f82a607d8e5ad680acdc96d2e91856571eff05a285b14"} Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.695553 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f21ecbb184ef9bff3f82a607d8e5ad680acdc96d2e91856571eff05a285b14" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.783202 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t"] Feb 16 15:20:39 crc kubenswrapper[4705]: E0216 15:20:39.783974 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73749fc-8501-405f-bd7e-de9fca2d968a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.783995 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73749fc-8501-405f-bd7e-de9fca2d968a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.784284 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73749fc-8501-405f-bd7e-de9fca2d968a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.785403 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.788011 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.791358 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.791482 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.791383 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.806812 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t"] Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.922252 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.922715 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 
15:20:39.922765 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.922800 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.026833 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.026914 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.026986 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.027038 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.031872 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.032691 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.047056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.050169 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.112852 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.980640 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t"] Feb 16 15:20:40 crc kubenswrapper[4705]: W0216 15:20:40.983064 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae6ba4a0_6ae7_42c6_9d27_cb62696d2c85.slice/crio-431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405 WatchSource:0}: Error finding container 431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405: Status 404 returned error can't find the container with id 431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405 Feb 16 15:20:41 crc kubenswrapper[4705]: I0216 15:20:41.723291 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" event={"ID":"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85","Type":"ContainerStarted","Data":"431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405"} Feb 16 15:20:43 crc kubenswrapper[4705]: I0216 15:20:43.755407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" 
event={"ID":"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85","Type":"ContainerStarted","Data":"e6cc743d4ef1f73713fbb9c6a811713740425faca4b1cb39c8806738ea026449"} Feb 16 15:20:43 crc kubenswrapper[4705]: I0216 15:20:43.785071 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" podStartSLOduration=3.03453318 podStartE2EDuration="4.785039541s" podCreationTimestamp="2026-02-16 15:20:39 +0000 UTC" firstStartedPulling="2026-02-16 15:20:40.988219857 +0000 UTC m=+1635.173196943" lastFinishedPulling="2026-02-16 15:20:42.738726218 +0000 UTC m=+1636.923703304" observedRunningTime="2026-02-16 15:20:43.774338478 +0000 UTC m=+1637.959315554" watchObservedRunningTime="2026-02-16 15:20:43.785039541 +0000 UTC m=+1637.970016647" Feb 16 15:20:47 crc kubenswrapper[4705]: E0216 15:20:47.424923 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:20:48 crc kubenswrapper[4705]: E0216 15:20:48.570040 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:20:48 crc kubenswrapper[4705]: E0216 15:20:48.570464 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:20:48 crc kubenswrapper[4705]: E0216 15:20:48.570622 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5
d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:20:48 crc kubenswrapper[4705]: E0216 15:20:48.572699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:20:50 crc kubenswrapper[4705]: I0216 15:20:50.420070 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:20:50 crc kubenswrapper[4705]: E0216 15:20:50.421161 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:20:58 crc kubenswrapper[4705]: E0216 15:20:58.539153 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:20:58 crc kubenswrapper[4705]: E0216 15:20:58.539945 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:20:58 crc kubenswrapper[4705]: E0216 15:20:58.540090 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:20:58 crc kubenswrapper[4705]: E0216 15:20:58.541847 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:02 crc kubenswrapper[4705]: E0216 15:21:02.424920 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:03 crc kubenswrapper[4705]: I0216 15:21:03.420714 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:03 crc kubenswrapper[4705]: E0216 15:21:03.421476 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:21:10 crc kubenswrapper[4705]: E0216 15:21:10.424012 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:11 crc kubenswrapper[4705]: I0216 15:21:11.145883 4705 generic.go:334] "Generic (PLEG): container finished" podID="3e86fa10-e583-4f86-97f5-e95ec2c9e9e0" containerID="4889bc008f884b027c0a7a92ff8bbfd0547ce687450d7545c32f2bdf009295b9" exitCode=0 Feb 16 15:21:11 crc kubenswrapper[4705]: I0216 15:21:11.145998 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0","Type":"ContainerDied","Data":"4889bc008f884b027c0a7a92ff8bbfd0547ce687450d7545c32f2bdf009295b9"} Feb 16 15:21:12 crc kubenswrapper[4705]: I0216 15:21:12.162265 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0","Type":"ContainerStarted","Data":"2f9967b51caa448c77442bfa47901aa5cb2237ddbe6775da90e5595999d18128"} Feb 16 15:21:12 crc kubenswrapper[4705]: I0216 15:21:12.163051 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 16 15:21:14 crc kubenswrapper[4705]: E0216 15:21:14.423446 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:14 crc kubenswrapper[4705]: I0216 15:21:14.456417 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=40.456396313 podStartE2EDuration="40.456396313s" podCreationTimestamp="2026-02-16 15:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:21:12.194162935 +0000 UTC m=+1666.379140031" watchObservedRunningTime="2026-02-16 15:21:14.456396313 +0000 UTC m=+1668.641373399" Feb 16 15:21:17 crc kubenswrapper[4705]: I0216 15:21:17.420226 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:17 crc kubenswrapper[4705]: E0216 15:21:17.421292 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:21:23 crc kubenswrapper[4705]: E0216 15:21:23.425306 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:25 crc kubenswrapper[4705]: I0216 15:21:25.035035 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 16 15:21:25 crc kubenswrapper[4705]: I0216 15:21:25.121517 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:28 crc kubenswrapper[4705]: E0216 15:21:28.429730 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:30 crc kubenswrapper[4705]: I0216 15:21:30.161141 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" containerID="cri-o://ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30" gracePeriod=604795 Feb 16 15:21:30 crc kubenswrapper[4705]: I0216 15:21:30.562517 4705 scope.go:117] "RemoveContainer" containerID="7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9" Feb 16 15:21:30 crc kubenswrapper[4705]: I0216 15:21:30.611254 4705 scope.go:117] 
"RemoveContainer" containerID="72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79" Feb 16 15:21:30 crc kubenswrapper[4705]: I0216 15:21:30.659322 4705 scope.go:117] "RemoveContainer" containerID="baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac" Feb 16 15:21:32 crc kubenswrapper[4705]: I0216 15:21:32.421166 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:32 crc kubenswrapper[4705]: E0216 15:21:32.422339 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:21:33 crc kubenswrapper[4705]: I0216 15:21:33.336122 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 16 15:21:36 crc kubenswrapper[4705]: I0216 15:21:36.482725 4705 generic.go:334] "Generic (PLEG): container finished" podID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerID="ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30" exitCode=0 Feb 16 15:21:36 crc kubenswrapper[4705]: I0216 15:21:36.483313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerDied","Data":"ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30"} Feb 16 15:21:36 crc kubenswrapper[4705]: I0216 15:21:36.919421 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.096478 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.096991 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.097022 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.097179 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.097391 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.097417 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd25j\" 
(UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098027 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098070 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098184 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098229 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098278 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 
crc kubenswrapper[4705]: I0216 15:21:37.105285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.109297 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.110886 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.117102 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.123929 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info" (OuterVolumeSpecName: "pod-info") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.152135 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c" (OuterVolumeSpecName: "persistence") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.162884 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.164454 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j" (OuterVolumeSpecName: "kube-api-access-pd25j") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "kube-api-access-pd25j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.193928 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data" (OuterVolumeSpecName: "config-data") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205406 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205444 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205456 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd25j\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205490 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") on node \"crc\" " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205503 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205514 4705 reconciler_common.go:293] "Volume 
detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205522 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205532 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205542 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.218976 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf" (OuterVolumeSpecName: "server-conf") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.243116 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.243295 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c") on node "crc" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.295176 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.308569 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.308616 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.308630 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.498901 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerDied","Data":"c10aeda896c97ab2b56b22cb8e034aaa58126bfac49a954b06a32ef9f4316ccc"} Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.498967 4705 scope.go:117] "RemoveContainer" 
containerID="ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.499007 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.541193 4705 scope.go:117] "RemoveContainer" containerID="86e9ac4153a2ccf0f2f0a689cbb68d98c66cd9f62606340a11ddf8bd0f8e2f02" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.550419 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.566098 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.642563 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:37 crc kubenswrapper[4705]: E0216 15:21:37.643536 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.643557 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" Feb 16 15:21:37 crc kubenswrapper[4705]: E0216 15:21:37.643576 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="setup-container" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.643582 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="setup-container" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.646298 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.654588 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.660611 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728543 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728595 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-server-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af0e4de4-5af4-4d5c-b2c4-963771612f94-pod-info\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728749 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728848 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728907 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729146 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729313 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-config-data\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729431 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f49nk\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-kube-api-access-f49nk\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729809 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af0e4de4-5af4-4d5c-b2c4-963771612f94-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832495 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af0e4de4-5af4-4d5c-b2c4-963771612f94-pod-info\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832553 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832585 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832611 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832646 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832686 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-config-data\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832719 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f49nk\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-kube-api-access-f49nk\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832779 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832819 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af0e4de4-5af4-4d5c-b2c4-963771612f94-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc 
kubenswrapper[4705]: I0216 15:21:37.832912 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832935 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-server-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.833938 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.834240 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-server-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.835024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-config-data\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.835855 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.836787 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.840202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.841163 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.848986 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af0e4de4-5af4-4d5c-b2c4-963771612f94-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.866050 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f49nk\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-kube-api-access-f49nk\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " 
pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.866479 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af0e4de4-5af4-4d5c-b2c4-963771612f94-pod-info\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.866898 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.866927 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6913a5af6e0b901f5e41cc9da5820d3446361504ddf8a58e3143477836427e51/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.980169 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.994662 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:21:38 crc kubenswrapper[4705]: E0216 15:21:38.422925 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:38 crc kubenswrapper[4705]: I0216 15:21:38.434430 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" path="/var/lib/kubelet/pods/3ba19f15-a399-4d4b-bf32-a2a870a660e5/volumes" Feb 16 15:21:38 crc kubenswrapper[4705]: I0216 15:21:38.515069 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:39 crc kubenswrapper[4705]: I0216 15:21:39.527231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af0e4de4-5af4-4d5c-b2c4-963771612f94","Type":"ContainerStarted","Data":"73c4bd308c39c7d0431f11bc6afcb72243dfef0a42d552bb8c6fdc299c566e41"} Feb 16 15:21:41 crc kubenswrapper[4705]: E0216 15:21:41.423187 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:41 crc kubenswrapper[4705]: I0216 15:21:41.552341 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af0e4de4-5af4-4d5c-b2c4-963771612f94","Type":"ContainerStarted","Data":"dd038952ef5a63ef81d9dbbf032a40826c341d8aba2d1047ba3856923f4222fc"} Feb 16 15:21:47 crc kubenswrapper[4705]: I0216 15:21:47.420861 4705 scope.go:117] "RemoveContainer" 
containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:47 crc kubenswrapper[4705]: E0216 15:21:47.422402 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:21:52 crc kubenswrapper[4705]: E0216 15:21:52.435150 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:53 crc kubenswrapper[4705]: E0216 15:21:53.421317 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:58 crc kubenswrapper[4705]: I0216 15:21:58.421160 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:58 crc kubenswrapper[4705]: E0216 15:21:58.422318 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:05 crc kubenswrapper[4705]: E0216 15:22:05.423014 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:22:06 crc kubenswrapper[4705]: E0216 15:22:06.435576 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:11 crc kubenswrapper[4705]: I0216 15:22:11.420709 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:22:11 crc kubenswrapper[4705]: E0216 15:22:11.422761 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:14 crc kubenswrapper[4705]: I0216 15:22:14.062708 4705 generic.go:334] "Generic (PLEG): container finished" podID="af0e4de4-5af4-4d5c-b2c4-963771612f94" containerID="dd038952ef5a63ef81d9dbbf032a40826c341d8aba2d1047ba3856923f4222fc" exitCode=0 Feb 16 15:22:14 crc kubenswrapper[4705]: I0216 15:22:14.062791 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"af0e4de4-5af4-4d5c-b2c4-963771612f94","Type":"ContainerDied","Data":"dd038952ef5a63ef81d9dbbf032a40826c341d8aba2d1047ba3856923f4222fc"} Feb 16 15:22:15 crc kubenswrapper[4705]: I0216 15:22:15.079764 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af0e4de4-5af4-4d5c-b2c4-963771612f94","Type":"ContainerStarted","Data":"641c1288c2d276fed0c1ca32e80eec0e24c5856c3ff63e7450bb313b86eeca4b"} Feb 16 15:22:15 crc kubenswrapper[4705]: I0216 15:22:15.080907 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 15:22:15 crc kubenswrapper[4705]: I0216 15:22:15.125519 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.125483677 podStartE2EDuration="38.125483677s" podCreationTimestamp="2026-02-16 15:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:22:15.116095211 +0000 UTC m=+1729.301072307" watchObservedRunningTime="2026-02-16 15:22:15.125483677 +0000 UTC m=+1729.310460763" Feb 16 15:22:17 crc kubenswrapper[4705]: E0216 15:22:17.423904 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:19 crc kubenswrapper[4705]: E0216 15:22:19.544547 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted 
or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:22:19 crc kubenswrapper[4705]: E0216 15:22:19.544650 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:22:19 crc kubenswrapper[4705]: E0216 15:22:19.544825 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube
-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:22:19 crc kubenswrapper[4705]: E0216 15:22:19.546008 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:22:24 crc kubenswrapper[4705]: I0216 15:22:24.420853 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:22:24 crc kubenswrapper[4705]: E0216 15:22:24.422655 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:28 crc kubenswrapper[4705]: I0216 15:22:27.999628 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 15:22:28 crc kubenswrapper[4705]: E0216 15:22:28.539269 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:22:28 crc kubenswrapper[4705]: E0216 15:22:28.539358 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:22:28 crc kubenswrapper[4705]: E0216 15:22:28.539535 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:22:28 crc kubenswrapper[4705]: E0216 15:22:28.540702 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:30 crc kubenswrapper[4705]: E0216 15:22:30.425390 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:22:39 crc kubenswrapper[4705]: I0216 15:22:39.420796 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:22:39 crc kubenswrapper[4705]: E0216 15:22:39.422576 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:42 crc kubenswrapper[4705]: E0216 15:22:42.425011 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:45 crc kubenswrapper[4705]: E0216 15:22:45.423032 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:22:52 
crc kubenswrapper[4705]: I0216 15:22:52.420614 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:22:52 crc kubenswrapper[4705]: E0216 15:22:52.422130 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:53 crc kubenswrapper[4705]: I0216 15:22:53.062093 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7hxxb"] Feb 16 15:22:53 crc kubenswrapper[4705]: I0216 15:22:53.076895 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7hxxb"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.065091 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zf4nh"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.080464 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.098979 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.113563 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.126197 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zf4nh"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.140425 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.153429 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.180445 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.204128 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.223970 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.439587 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f443bcd-c93f-4b89-a048-cc92f28f5854" path="/var/lib/kubelet/pods/3f443bcd-c93f-4b89-a048-cc92f28f5854/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.442701 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" path="/var/lib/kubelet/pods/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.444762 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a486f037-5709-4199-9f76-0cb0c995af25" path="/var/lib/kubelet/pods/a486f037-5709-4199-9f76-0cb0c995af25/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.446198 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2232806-cac7-4787-839b-9bcecac93820" path="/var/lib/kubelet/pods/b2232806-cac7-4787-839b-9bcecac93820/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.449128 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" 
path="/var/lib/kubelet/pods/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.450732 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37b9312-710d-49b4-8cc7-3956df176627" path="/var/lib/kubelet/pods/f37b9312-710d-49b4-8cc7-3956df176627/volumes" Feb 16 15:22:55 crc kubenswrapper[4705]: E0216 15:22:55.424003 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:56 crc kubenswrapper[4705]: E0216 15:22:56.435403 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.059272 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gg5c2"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.077876 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.095775 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.106688 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.117669 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gg5c2"] Feb 16 15:23:04 crc 
kubenswrapper[4705]: I0216 15:23:04.138592 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.420382 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:04 crc kubenswrapper[4705]: E0216 15:23:04.420774 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.455973 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" path="/var/lib/kubelet/pods/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7/volumes" Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.466396 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a2df1c-b87d-4765-b900-e6b165802be2" path="/var/lib/kubelet/pods/45a2df1c-b87d-4765-b900-e6b165802be2/volumes" Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.468635 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5de6a8-c858-4f91-8833-e012562ee1a3" path="/var/lib/kubelet/pods/5c5de6a8-c858-4f91-8833-e012562ee1a3/volumes" Feb 16 15:23:05 crc kubenswrapper[4705]: I0216 15:23:05.044845 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"] Feb 16 15:23:05 crc kubenswrapper[4705]: I0216 15:23:05.064865 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"] Feb 16 15:23:06 crc kubenswrapper[4705]: I0216 
15:23:06.436882 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c074c5c-fae9-49f3-8139-adb92b649951" path="/var/lib/kubelet/pods/3c074c5c-fae9-49f3-8139-adb92b649951/volumes" Feb 16 15:23:07 crc kubenswrapper[4705]: E0216 15:23:07.424333 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:23:10 crc kubenswrapper[4705]: E0216 15:23:10.423579 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:15 crc kubenswrapper[4705]: I0216 15:23:15.042010 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lz7zl"] Feb 16 15:23:15 crc kubenswrapper[4705]: I0216 15:23:15.056031 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lz7zl"] Feb 16 15:23:16 crc kubenswrapper[4705]: I0216 15:23:16.435953 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b68c2080-dd84-406b-ba19-b4cdd136c90e" path="/var/lib/kubelet/pods/b68c2080-dd84-406b-ba19-b4cdd136c90e/volumes" Feb 16 15:23:18 crc kubenswrapper[4705]: I0216 15:23:18.420423 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:18 crc kubenswrapper[4705]: E0216 15:23:18.421837 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:20 crc kubenswrapper[4705]: E0216 15:23:20.422444 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:23:21 crc kubenswrapper[4705]: E0216 15:23:21.423027 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:30 crc kubenswrapper[4705]: I0216 15:23:30.420339 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:30 crc kubenswrapper[4705]: E0216 15:23:30.422290 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:30 crc kubenswrapper[4705]: I0216 15:23:30.894776 4705 scope.go:117] "RemoveContainer" containerID="8cbd1af309adfc1dafcf0ea3d77759d2f86265b9808b0b7435417bb754ee409d" Feb 16 15:23:30 crc kubenswrapper[4705]: I0216 15:23:30.947452 4705 
scope.go:117] "RemoveContainer" containerID="48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.029973 4705 scope.go:117] "RemoveContainer" containerID="e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.065735 4705 scope.go:117] "RemoveContainer" containerID="e75206ab14fb3712b094ac170d341a1c3364f06bb8b3dfb2b35e1aa8ca3e80f3" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.098167 4705 scope.go:117] "RemoveContainer" containerID="5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.168853 4705 scope.go:117] "RemoveContainer" containerID="2f79d797c3129ced8ee4fbe01de9894c6da786bc25e0e54f5445a9d4c4891698" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.213775 4705 scope.go:117] "RemoveContainer" containerID="5297f3386efbde9d5a58546d4fc2397672bac40dc5cdf3c17082d57b2647467b" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.280468 4705 scope.go:117] "RemoveContainer" containerID="55a8a589929400f0bdc43a4b2e65afccb3545d7c47842f8b1d91a93888750508" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.446555 4705 scope.go:117] "RemoveContainer" containerID="0017c5743d3acab30b80453ad1028a61abdf169aafcd88d8f11df99404053765" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.494597 4705 generic.go:334] "Generic (PLEG): container finished" podID="ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" containerID="e6cc743d4ef1f73713fbb9c6a811713740425faca4b1cb39c8806738ea026449" exitCode=0 Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.494685 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" event={"ID":"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85","Type":"ContainerDied","Data":"e6cc743d4ef1f73713fbb9c6a811713740425faca4b1cb39c8806738ea026449"} Feb 16 15:23:31 crc kubenswrapper[4705]: 
I0216 15:23:31.545917 4705 scope.go:117] "RemoveContainer" containerID="d200b7c2e16f651dc486f4322085e2d7e7499ef7b85b5e81ebde83ca03928405" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.570891 4705 scope.go:117] "RemoveContainer" containerID="f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.592283 4705 scope.go:117] "RemoveContainer" containerID="931b20b998ef273223e9f5d6e3f1f3e4584cf0ee619597e2b65633773ea18c75" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.621174 4705 scope.go:117] "RemoveContainer" containerID="e22a4e97a46141c555ff698e641012530b3f1b9226d8679c4a611d3291ce6a4f" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.654912 4705 scope.go:117] "RemoveContainer" containerID="a6d8674e75cd34a23ae23cec074aadbd60e573be5fb8f1c35656725571554e5a" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.676957 4705 scope.go:117] "RemoveContainer" containerID="dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.708618 4705 scope.go:117] "RemoveContainer" containerID="18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.741514 4705 scope.go:117] "RemoveContainer" containerID="637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.764633 4705 scope.go:117] "RemoveContainer" containerID="56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.785563 4705 scope.go:117] "RemoveContainer" containerID="65b95c950083c9aeb3e3619fc2bb885d98f3037af8bdbac9d4afb42843773d92" Feb 16 15:23:32 crc kubenswrapper[4705]: E0216 15:23:32.423570 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.030262 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.119651 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") pod \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.119839 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") pod \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.120230 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") pod \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.120341 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") pod \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.129023 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" (UID: "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.129784 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt" (OuterVolumeSpecName: "kube-api-access-bjfnt") pod "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" (UID: "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85"). InnerVolumeSpecName "kube-api-access-bjfnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.162870 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory" (OuterVolumeSpecName: "inventory") pod "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" (UID: "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.173478 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" (UID: "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.228120 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.228204 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.228237 4705 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.228263 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.553992 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" event={"ID":"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85","Type":"ContainerDied","Data":"431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405"} Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.554910 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.554132 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.681292 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g"] Feb 16 15:23:33 crc kubenswrapper[4705]: E0216 15:23:33.682043 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.682067 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.682316 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.683447 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.690503 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.693870 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.694907 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.695311 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.697562 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g"] Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.745567 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.746127 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc 
kubenswrapper[4705]: I0216 15:23:33.746303 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.848023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.848128 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.848784 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.854474 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.854749 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.879713 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:34 crc kubenswrapper[4705]: I0216 15:23:34.013489 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:34 crc kubenswrapper[4705]: E0216 15:23:34.425434 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:34 crc kubenswrapper[4705]: I0216 15:23:34.671759 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g"] Feb 16 15:23:35 crc kubenswrapper[4705]: I0216 15:23:35.589164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" event={"ID":"447b9ab7-d583-4e71-8eca-fb352e541b13","Type":"ContainerStarted","Data":"28b8a03511de9f268771916995dae0e764844fbb28d7392f4eab5fc6742c96ba"} Feb 16 15:23:35 crc kubenswrapper[4705]: I0216 15:23:35.589737 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" event={"ID":"447b9ab7-d583-4e71-8eca-fb352e541b13","Type":"ContainerStarted","Data":"1755306531a2954e5ed18a62c5063702c29a4b80eca86c2194ea2e1192d5af0b"} Feb 16 15:23:35 crc kubenswrapper[4705]: I0216 15:23:35.626842 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" podStartSLOduration=2.155345768 podStartE2EDuration="2.626805763s" podCreationTimestamp="2026-02-16 15:23:33 +0000 UTC" firstStartedPulling="2026-02-16 15:23:34.677589904 +0000 UTC m=+1808.862566990" lastFinishedPulling="2026-02-16 15:23:35.149049899 +0000 UTC m=+1809.334026985" observedRunningTime="2026-02-16 15:23:35.610615413 +0000 UTC m=+1809.795592489" watchObservedRunningTime="2026-02-16 15:23:35.626805763 
+0000 UTC m=+1809.811782869" Feb 16 15:23:37 crc kubenswrapper[4705]: I0216 15:23:37.067130 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2kkpm"] Feb 16 15:23:37 crc kubenswrapper[4705]: I0216 15:23:37.082238 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2kkpm"] Feb 16 15:23:38 crc kubenswrapper[4705]: I0216 15:23:38.447851 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eba064a-3f7c-4395-beca-1b77b85e1a29" path="/var/lib/kubelet/pods/1eba064a-3f7c-4395-beca-1b77b85e1a29/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.045187 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tr9gx"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.063499 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tr9gx"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.087099 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-mdv7p"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.099144 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lqlft"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.112101 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.123498 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lqlft"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.140401 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-mdv7p"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.156488 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.173986 4705 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-db-create-fpgrj"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.192105 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fpgrj"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.443791 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00962490-7e63-4ba2-95e5-d95167d392bd" path="/var/lib/kubelet/pods/00962490-7e63-4ba2-95e5-d95167d392bd/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.446947 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" path="/var/lib/kubelet/pods/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.449974 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" path="/var/lib/kubelet/pods/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.450777 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" path="/var/lib/kubelet/pods/ae5e7e5c-9868-457d-872b-ec1d3f34449a/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.452210 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b60553-5a29-4222-ad99-2f33cedd3879" path="/var/lib/kubelet/pods/f5b60553-5a29-4222-ad99-2f33cedd3879/volumes" Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.055394 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"] Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.076592 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"] Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.091795 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"] Feb 16 
15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.102969 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"] Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.113526 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"] Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.123437 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"] Feb 16 15:23:42 crc kubenswrapper[4705]: I0216 15:23:42.441931 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" path="/var/lib/kubelet/pods/104ec45d-e95d-40c0-80a8-d59de9e2d45a/volumes" Feb 16 15:23:42 crc kubenswrapper[4705]: I0216 15:23:42.445407 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601c1c55-db3a-443a-bd6b-7d76e884697c" path="/var/lib/kubelet/pods/601c1c55-db3a-443a-bd6b-7d76e884697c/volumes" Feb 16 15:23:42 crc kubenswrapper[4705]: I0216 15:23:42.449748 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" path="/var/lib/kubelet/pods/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d/volumes" Feb 16 15:23:44 crc kubenswrapper[4705]: I0216 15:23:44.419903 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:44 crc kubenswrapper[4705]: E0216 15:23:44.420535 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:45 crc kubenswrapper[4705]: E0216 
15:23:45.424967 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:23:46 crc kubenswrapper[4705]: E0216 15:23:46.429778 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:50 crc kubenswrapper[4705]: I0216 15:23:50.056460 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gmlkp"] Feb 16 15:23:50 crc kubenswrapper[4705]: I0216 15:23:50.070145 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gmlkp"] Feb 16 15:23:50 crc kubenswrapper[4705]: I0216 15:23:50.441997 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d65b4384-a678-4002-9583-7f89082af14a" path="/var/lib/kubelet/pods/d65b4384-a678-4002-9583-7f89082af14a/volumes" Feb 16 15:23:58 crc kubenswrapper[4705]: E0216 15:23:58.425414 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:59 crc kubenswrapper[4705]: I0216 15:23:59.421708 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:59 crc kubenswrapper[4705]: E0216 15:23:59.422860 4705 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:59 crc kubenswrapper[4705]: E0216 15:23:59.424504 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:24:10 crc kubenswrapper[4705]: I0216 15:24:10.422853 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:24:10 crc kubenswrapper[4705]: E0216 15:24:10.423762 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:24:10 crc kubenswrapper[4705]: E0216 15:24:10.426482 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:24:12 crc kubenswrapper[4705]: E0216 15:24:12.422688 4705 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:24:21 crc kubenswrapper[4705]: I0216 15:24:21.071935 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-76rfw"] Feb 16 15:24:21 crc kubenswrapper[4705]: I0216 15:24:21.089145 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-76rfw"] Feb 16 15:24:22 crc kubenswrapper[4705]: I0216 15:24:22.420088 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:24:22 crc kubenswrapper[4705]: E0216 15:24:22.422030 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:24:22 crc kubenswrapper[4705]: I0216 15:24:22.437047 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baaef700-c962-494f-bee0-67990bf8bd84" path="/var/lib/kubelet/pods/baaef700-c962-494f-bee0-67990bf8bd84/volumes" Feb 16 15:24:24 crc kubenswrapper[4705]: E0216 15:24:24.426809 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:24:26 crc kubenswrapper[4705]: E0216 15:24:26.436633 4705 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:24:28 crc kubenswrapper[4705]: I0216 15:24:28.047732 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-f8fxj"] Feb 16 15:24:28 crc kubenswrapper[4705]: I0216 15:24:28.064678 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-f8fxj"] Feb 16 15:24:28 crc kubenswrapper[4705]: I0216 15:24:28.434707 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" path="/var/lib/kubelet/pods/e652b8a2-fe79-4cdc-b376-c4bc0b85197f/volumes" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.231496 4705 scope.go:117] "RemoveContainer" containerID="ca5ac92a7dc65970aa1597da51d8d235081d2d56a401566acfbc85af5a226fbd" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.271106 4705 scope.go:117] "RemoveContainer" containerID="bdfd63c3ecc1595f3e167fa9202bd03a5c184ef38a3f05f7c5708bbb69702bbe" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.330885 4705 scope.go:117] "RemoveContainer" containerID="15fa487fc78680eebbada617a958beee0dc93fabf1acb0258ad86c6a6637b4a3" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.420265 4705 scope.go:117] "RemoveContainer" containerID="be8b3e0326ea71bbc9f9e87ea816230ad05f7c364ba58e44e8812ca01437d1c1" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.479153 4705 scope.go:117] "RemoveContainer" containerID="2f3be024158b93066d5262e9224908fddecc1a451092d024f7b8f2601466a9b4" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.564283 4705 scope.go:117] "RemoveContainer" containerID="264622adf5af6886a931115cc69de7300b2b26acd7842f92edb4bffbce142d23" Feb 16 15:24:32 
crc kubenswrapper[4705]: I0216 15:24:32.605280 4705 scope.go:117] "RemoveContainer" containerID="018bf846d7fe64a859e3c5304849a02f3a4179f776cea2e8ccc7acda8fa71421"
Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.628781 4705 scope.go:117] "RemoveContainer" containerID="9d7693ed517cfe584b58f1eb27ff9e018459aad540cb357f988a64c00e64f25e"
Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.663258 4705 scope.go:117] "RemoveContainer" containerID="2d2e1b5af863f030f5a82ceae3d64982596f76c2c83b8724fb79e532c3c6c337"
Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.705147 4705 scope.go:117] "RemoveContainer" containerID="01529216e6cfee37b45daa7e445d747074cda05873b794d38ec8cf37020c339e"
Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.752107 4705 scope.go:117] "RemoveContainer" containerID="3441f97b82c61443005d5c636ffa1b9046d09392c2db4e6c04fcbda2de0e8e36"
Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.785273 4705 scope.go:117] "RemoveContainer" containerID="e15307e3817ddf50b95ef7cb58ca5a91c87caee40526fb238aca09e99fde3e55"
Feb 16 15:24:33 crc kubenswrapper[4705]: I0216 15:24:33.421257 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"
Feb 16 15:24:34 crc kubenswrapper[4705]: I0216 15:24:34.450244 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a"}
Feb 16 15:24:35 crc kubenswrapper[4705]: I0216 15:24:35.047412 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-m8mrp"]
Feb 16 15:24:35 crc kubenswrapper[4705]: I0216 15:24:35.065617 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-m8mrp"]
Feb 16 15:24:36 crc kubenswrapper[4705]: I0216 15:24:36.450201 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" path="/var/lib/kubelet/pods/eeee3c96-5da7-42eb-9fd9-07a5f09182d5/volumes"
Feb 16 15:24:37 crc kubenswrapper[4705]: E0216 15:24:37.425492 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:24:39 crc kubenswrapper[4705]: E0216 15:24:39.424979 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:24:47 crc kubenswrapper[4705]: I0216 15:24:47.054404 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4vj9p"]
Feb 16 15:24:47 crc kubenswrapper[4705]: I0216 15:24:47.074611 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4vj9p"]
Feb 16 15:24:48 crc kubenswrapper[4705]: I0216 15:24:48.044391 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-scncd"]
Feb 16 15:24:48 crc kubenswrapper[4705]: I0216 15:24:48.056509 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-scncd"]
Feb 16 15:24:48 crc kubenswrapper[4705]: I0216 15:24:48.441963 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302aee2f-61be-439f-a04e-356243bb65b6" path="/var/lib/kubelet/pods/302aee2f-61be-439f-a04e-356243bb65b6/volumes"
Feb 16 15:24:48 crc kubenswrapper[4705]: I0216 15:24:48.443015 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" path="/var/lib/kubelet/pods/ddb24908-6026-4fe7-81b6-345402c9398e/volumes"
Feb 16 15:24:49 crc kubenswrapper[4705]: E0216 15:24:49.422694 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:24:54 crc kubenswrapper[4705]: E0216 15:24:54.423902 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:25:03 crc kubenswrapper[4705]: I0216 15:25:03.422895 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 15:25:03 crc kubenswrapper[4705]: E0216 15:25:03.519901 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 15:25:03 crc kubenswrapper[4705]: E0216 15:25:03.519966 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 15:25:03 crc kubenswrapper[4705]: E0216 15:25:03.520108 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 15:25:03 crc kubenswrapper[4705]: E0216 15:25:03.521952 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:25:06 crc kubenswrapper[4705]: E0216 15:25:06.435667 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:25:17 crc kubenswrapper[4705]: E0216 15:25:17.556305 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 15:25:17 crc kubenswrapper[4705]: E0216 15:25:17.556911 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 15:25:17 crc kubenswrapper[4705]: E0216 15:25:17.557059 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 15:25:17 crc kubenswrapper[4705]: E0216 15:25:17.558445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:25:19 crc kubenswrapper[4705]: E0216 15:25:19.422528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:25:31 crc kubenswrapper[4705]: E0216 15:25:31.423666 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:25:32 crc kubenswrapper[4705]: E0216 15:25:32.421679 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.058510 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"]
Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.072360 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"]
Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.082485 4705 scope.go:117] "RemoveContainer" containerID="a7a5ccb1213e05403b2c609c1d0142378875d98d299f4c29f81e4b95d8d137f8"
Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.125073 4705 scope.go:117] "RemoveContainer" containerID="99a77b47a3f02f20d1a89b92aa183dce6d0d9402668b42b604a80e789789f55a"
Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.183967 4705 scope.go:117] "RemoveContainer" containerID="c4e7cf35ca9cdb1d088afb52cbad0fa1eb61329b9888ee9b04889ba66e69edd4"
Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.038033 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"]
Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.048340 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"]
Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.058237 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"]
Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.068460 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"]
Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.433303 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38af35f6-7590-41c4-9442-ec89fe02106f" path="/var/lib/kubelet/pods/38af35f6-7590-41c4-9442-ec89fe02106f/volumes"
Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.434245 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6fc941-1576-4817-859a-6644349bc8cd" path="/var/lib/kubelet/pods/3c6fc941-1576-4817-859a-6644349bc8cd/volumes"
Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.435564 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18d067a-2ef1-4b11-936f-aef7f7910a80" path="/var/lib/kubelet/pods/c18d067a-2ef1-4b11-936f-aef7f7910a80/volumes"
Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.045710 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"]
Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.070448 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"]
Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.103044 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mqnvt"]
Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.129449 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"]
Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.157436 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"]
Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.175611 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mqnvt"]
Feb 16 15:25:38 crc kubenswrapper[4705]: I0216 15:25:38.434777 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" path="/var/lib/kubelet/pods/6a0302cb-f7dd-46d4-8df0-2ab25bddec10/volumes"
Feb 16 15:25:38 crc kubenswrapper[4705]: I0216 15:25:38.437290 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" path="/var/lib/kubelet/pods/7b2a0a9c-1379-457e-a5e2-537304cfdcff/volumes"
Feb 16 15:25:38 crc kubenswrapper[4705]: I0216 15:25:38.438278 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b468686-b5ab-423d-a720-a2c77aed457f" path="/var/lib/kubelet/pods/8b468686-b5ab-423d-a720-a2c77aed457f/volumes"
Feb 16 15:25:42 crc kubenswrapper[4705]: E0216 15:25:42.422562 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:25:46 crc kubenswrapper[4705]: E0216 15:25:46.430855 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:25:57 crc kubenswrapper[4705]: E0216 15:25:57.424426 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:26:00 crc kubenswrapper[4705]: E0216 15:26:00.436490 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:26:10 crc kubenswrapper[4705]: E0216 15:26:10.423466 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:26:14 crc kubenswrapper[4705]: E0216 15:26:14.426306 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:26:18 crc kubenswrapper[4705]: I0216 15:26:18.071193 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"]
Feb 16 15:26:18 crc kubenswrapper[4705]: I0216 15:26:18.087578 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"]
Feb 16 15:26:18 crc kubenswrapper[4705]: I0216 15:26:18.445172 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06284688-bd14-48ff-adf1-d0dc441d1238" path="/var/lib/kubelet/pods/06284688-bd14-48ff-adf1-d0dc441d1238/volumes"
Feb 16 15:26:22 crc kubenswrapper[4705]: E0216 15:26:22.423759 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:26:28 crc kubenswrapper[4705]: E0216 15:26:28.423014 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.377831 4705 scope.go:117] "RemoveContainer" containerID="5298d8d4bbe490dcf8fd4d8c8fd18c95543c555b9240d37267fbfc9891ee3207"
Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.423037 4705 scope.go:117] "RemoveContainer" containerID="b6ff178ee59d258cd0a815ddbd0d83ca22d1d8fd5e5badc95b33346ac9ac1dd2"
Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.498153 4705 scope.go:117] "RemoveContainer" containerID="fa03ffbdc99df54493084bdd802dfc7cc972f18375229d2457f61f8fa6ea18b6"
Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.558706 4705 scope.go:117] "RemoveContainer" containerID="e8a382be23bea794eda4951ad147e8a541ec0cf46557fafa0b29ca1f74d84546"
Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.619026 4705 scope.go:117] "RemoveContainer" containerID="624e47298bbfcaa05f1d1cb521cf8da9b7629abb98c32b57ca82484813d5a2ce"
Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.682357 4705 scope.go:117] "RemoveContainer" containerID="85317c63c64342b640443d7128098cf7e3a161e71ceb14f41123a4cc90d3489a"
Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.763404 4705 scope.go:117] "RemoveContainer" containerID="8727f6608d01bea1d2d092cb593cbdfdbcf01d7388fded5a43fcf9ca1545112c"
Feb 16 15:26:34 crc kubenswrapper[4705]: E0216 15:26:34.423202 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:26:39 crc kubenswrapper[4705]: E0216 15:26:39.423524 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.066669 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"]
Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.084029 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-sz982"]
Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.097301 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"]
Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.108590 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-sz982"]
Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.450998 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481dd88a-36b9-432c-9d21-9221f5e98e6e" path="/var/lib/kubelet/pods/481dd88a-36b9-432c-9d21-9221f5e98e6e/volumes"
Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.451945 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" path="/var/lib/kubelet/pods/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2/volumes"
Feb 16 15:26:46 crc kubenswrapper[4705]: E0216 15:26:46.435088 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:26:52 crc kubenswrapper[4705]: E0216 15:26:52.423273 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:26:59 crc kubenswrapper[4705]: I0216 15:26:59.057654 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-6brrx"]
Feb 16 15:26:59 crc kubenswrapper[4705]: I0216 15:26:59.072394 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-6brrx"]
Feb 16 15:26:59 crc kubenswrapper[4705]: E0216 15:26:59.422878 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:27:00 crc kubenswrapper[4705]: I0216 15:27:00.440097 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" path="/var/lib/kubelet/pods/bf60aeda-83a7-4d56-95a6-c390c2d08b8a/volumes"
Feb 16 15:27:01 crc kubenswrapper[4705]: I0216 15:27:01.685106 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:27:01 crc kubenswrapper[4705]: I0216 15:27:01.685411 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:27:03 crc kubenswrapper[4705]: E0216 15:27:03.424662 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:27:10 crc kubenswrapper[4705]: E0216 15:27:10.423075 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:27:17 crc kubenswrapper[4705]: E0216 15:27:17.422292 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.046772 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"]
Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.063442 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"]
Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.071217 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"]
Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.081938 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"]
Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.436929 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" path="/var/lib/kubelet/pods/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993/volumes"
Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.437979 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" path="/var/lib/kubelet/pods/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8/volumes"
Feb 16 15:27:21 crc kubenswrapper[4705]: E0216 15:27:21.422587 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:27:30 crc kubenswrapper[4705]: E0216 15:27:30.422554 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:27:31 crc kubenswrapper[4705]: I0216 15:27:31.686032 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:27:31 crc kubenswrapper[4705]: I0216 15:27:31.686473 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:27:33 crc kubenswrapper[4705]: I0216 15:27:33.969751 4705 scope.go:117] "RemoveContainer" containerID="014788fc35c94841b6f951360c014870b95d49ee1ef3f79b1ab6afab99936dbb"
Feb 16 15:27:34 crc kubenswrapper[4705]: I0216 15:27:34.019730 4705 scope.go:117] "RemoveContainer" containerID="550b8aa10a670058b9e6ac10f7f37313d7d31e0cbd688f1364fdc7c57db609af"
Feb 16 15:27:34 crc kubenswrapper[4705]: I0216 15:27:34.060729 4705 scope.go:117] "RemoveContainer" containerID="156bb556fedfb04698cb018e9e76e595a938f3b84761da0b56951eb757c0d725"
Feb 16 15:27:34 crc kubenswrapper[4705]: I0216 15:27:34.135125 4705 scope.go:117] "RemoveContainer" containerID="5ae2ce7f764bba95fefdc2957453d34ae6c76d5367261ab8d7e532efc53c1306"
Feb 16 15:27:34 crc kubenswrapper[4705]: I0216 15:27:34.191304 4705 scope.go:117] "RemoveContainer" containerID="c4e41dff555ca49ad18fee2a483f8d8d621a7c447a6cc4eeeab8d6ada480a2b5"
Feb 16 15:27:36 crc kubenswrapper[4705]: E0216 15:27:36.433329 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:27:42 crc kubenswrapper[4705]: E0216 15:27:42.423462 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:27:50 crc kubenswrapper[4705]: E0216 15:27:50.423595 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:27:53 crc kubenswrapper[4705]: E0216 15:27:53.422352 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:28:01 crc kubenswrapper[4705]: I0216 15:28:01.684832 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:28:01 crc kubenswrapper[4705]: I0216 15:28:01.685526 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:28:01 crc kubenswrapper[4705]: I0216 15:28:01.685595 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4"
Feb 16 15:28:01 crc kubenswrapper[4705]: I0216 15:28:01.687317 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 15:28:01 crc kubenswrapper[4705]: I0216 15:28:01.687499 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a" gracePeriod=600
Feb 16 15:28:02 crc kubenswrapper[4705]: I0216 15:28:02.588679 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a" exitCode=0
Feb 16 15:28:02 crc kubenswrapper[4705]: I0216 15:28:02.588749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a"}
Feb 16 15:28:02 crc kubenswrapper[4705]: I0216 15:28:02.589273 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b"}
Feb 16 15:28:02 crc kubenswrapper[4705]: I0216 15:28:02.589296 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"
Feb 16 15:28:03 crc kubenswrapper[4705]: I0216 15:28:03.050864 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"]
Feb 16 15:28:03 crc kubenswrapper[4705]: I0216 15:28:03.063777 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"]
Feb 16 15:28:04 crc kubenswrapper[4705]: E0216 15:28:04.460323 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:28:04 crc kubenswrapper[4705]: I0216 15:28:04.471554 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" path="/var/lib/kubelet/pods/7d98759e-f50f-4b94-bd6a-8cfa1e083675/volumes"
Feb 16 15:28:08 crc kubenswrapper[4705]: E0216 15:28:08.421804 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:28:16 crc kubenswrapper[4705]: E0216 15:28:16.446699 4705 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:28:19 crc kubenswrapper[4705]: E0216 15:28:19.423412 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.839301 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.843141 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.856300 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.905415 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.905480 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") pod \"redhat-operators-ddjpg\" (UID: 
\"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.905760 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.010500 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.010567 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.010613 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.011179 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") pod \"redhat-operators-ddjpg\" (UID: 
\"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.011184 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.048379 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.177488 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: W0216 15:28:28.717817 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1db7ee89_5367_4ead_bd1d_bcae066db67d.slice/crio-d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b WatchSource:0}: Error finding container d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b: Status 404 returned error can't find the container with id d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.727819 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.946472 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" 
event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerStarted","Data":"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5"} Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.946526 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerStarted","Data":"d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b"} Feb 16 15:28:29 crc kubenswrapper[4705]: I0216 15:28:29.961666 4705 generic.go:334] "Generic (PLEG): container finished" podID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerID="cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5" exitCode=0 Feb 16 15:28:29 crc kubenswrapper[4705]: I0216 15:28:29.961733 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerDied","Data":"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5"} Feb 16 15:28:30 crc kubenswrapper[4705]: E0216 15:28:30.422525 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:28:31 crc kubenswrapper[4705]: I0216 15:28:31.989209 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerStarted","Data":"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363"} Feb 16 15:28:33 crc kubenswrapper[4705]: E0216 15:28:33.420812 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:28:34 crc kubenswrapper[4705]: I0216 15:28:34.378425 4705 scope.go:117] "RemoveContainer" containerID="eee5c8bc6c54de4fa60aca953615e0f47f05dac72e43473a8138c9827fdeee6c" Feb 16 15:28:36 crc kubenswrapper[4705]: I0216 15:28:36.032236 4705 generic.go:334] "Generic (PLEG): container finished" podID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerID="c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363" exitCode=0 Feb 16 15:28:36 crc kubenswrapper[4705]: I0216 15:28:36.032318 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerDied","Data":"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363"} Feb 16 15:28:37 crc kubenswrapper[4705]: I0216 15:28:37.052068 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerStarted","Data":"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982"} Feb 16 15:28:37 crc kubenswrapper[4705]: I0216 15:28:37.082566 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ddjpg" podStartSLOduration=3.462428564 podStartE2EDuration="10.082521911s" podCreationTimestamp="2026-02-16 15:28:27 +0000 UTC" firstStartedPulling="2026-02-16 15:28:29.965563957 +0000 UTC m=+2104.150541043" lastFinishedPulling="2026-02-16 15:28:36.585657314 +0000 UTC m=+2110.770634390" observedRunningTime="2026-02-16 15:28:37.071742987 +0000 UTC m=+2111.256720073" watchObservedRunningTime="2026-02-16 15:28:37.082521911 +0000 UTC m=+2111.267498987" Feb 16 15:28:38 crc kubenswrapper[4705]: I0216 15:28:38.178010 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:38 crc kubenswrapper[4705]: I0216 15:28:38.178486 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:39 crc kubenswrapper[4705]: I0216 15:28:39.239977 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ddjpg" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" probeResult="failure" output=< Feb 16 15:28:39 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:28:39 crc kubenswrapper[4705]: > Feb 16 15:28:43 crc kubenswrapper[4705]: E0216 15:28:43.423142 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:28:48 crc kubenswrapper[4705]: E0216 15:28:48.423029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:28:49 crc kubenswrapper[4705]: I0216 15:28:49.227955 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ddjpg" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" probeResult="failure" output=< Feb 16 15:28:49 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:28:49 crc kubenswrapper[4705]: > Feb 16 15:28:54 crc kubenswrapper[4705]: E0216 15:28:54.422564 4705 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:28:58 crc kubenswrapper[4705]: I0216 15:28:58.259748 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:58 crc kubenswrapper[4705]: I0216 15:28:58.319183 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.051237 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.307642 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ddjpg" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" containerID="cri-o://42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" gracePeriod=2 Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.802017 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.840607 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") pod \"1db7ee89-5367-4ead-bd1d-bcae066db67d\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.840730 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") pod \"1db7ee89-5367-4ead-bd1d-bcae066db67d\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.840876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") pod \"1db7ee89-5367-4ead-bd1d-bcae066db67d\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.841494 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities" (OuterVolumeSpecName: "utilities") pod "1db7ee89-5367-4ead-bd1d-bcae066db67d" (UID: "1db7ee89-5367-4ead-bd1d-bcae066db67d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.841768 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.873851 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq" (OuterVolumeSpecName: "kube-api-access-fclxq") pod "1db7ee89-5367-4ead-bd1d-bcae066db67d" (UID: "1db7ee89-5367-4ead-bd1d-bcae066db67d"). InnerVolumeSpecName "kube-api-access-fclxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.944064 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.986755 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1db7ee89-5367-4ead-bd1d-bcae066db67d" (UID: "1db7ee89-5367-4ead-bd1d-bcae066db67d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.046743 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320116 4705 generic.go:334] "Generic (PLEG): container finished" podID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerID="42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" exitCode=0 Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320175 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerDied","Data":"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982"} Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320212 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerDied","Data":"d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b"} Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320239 4705 scope.go:117] "RemoveContainer" containerID="42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320443 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.358151 4705 scope.go:117] "RemoveContainer" containerID="c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.374738 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.382507 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.399123 4705 scope.go:117] "RemoveContainer" containerID="cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.441740 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" path="/var/lib/kubelet/pods/1db7ee89-5367-4ead-bd1d-bcae066db67d/volumes" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.443006 4705 scope.go:117] "RemoveContainer" containerID="42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" Feb 16 15:29:00 crc kubenswrapper[4705]: E0216 15:29:00.443699 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982\": container with ID starting with 42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982 not found: ID does not exist" containerID="42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.443747 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982"} err="failed to get container status 
\"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982\": rpc error: code = NotFound desc = could not find container \"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982\": container with ID starting with 42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982 not found: ID does not exist" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.443777 4705 scope.go:117] "RemoveContainer" containerID="c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363" Feb 16 15:29:00 crc kubenswrapper[4705]: E0216 15:29:00.444261 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363\": container with ID starting with c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363 not found: ID does not exist" containerID="c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.444331 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363"} err="failed to get container status \"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363\": rpc error: code = NotFound desc = could not find container \"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363\": container with ID starting with c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363 not found: ID does not exist" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.444404 4705 scope.go:117] "RemoveContainer" containerID="cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5" Feb 16 15:29:00 crc kubenswrapper[4705]: E0216 15:29:00.445084 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5\": container with ID starting with cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5 not found: ID does not exist" containerID="cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.445255 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5"} err="failed to get container status \"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5\": rpc error: code = NotFound desc = could not find container \"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5\": container with ID starting with cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5 not found: ID does not exist" Feb 16 15:29:02 crc kubenswrapper[4705]: E0216 15:29:02.422145 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:29:06 crc kubenswrapper[4705]: E0216 15:29:06.428465 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:29:17 crc kubenswrapper[4705]: E0216 15:29:17.423278 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:29:20 crc kubenswrapper[4705]: E0216 15:29:20.421577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.248280 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"] Feb 16 15:29:25 crc kubenswrapper[4705]: E0216 15:29:25.249625 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="extract-content" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.249639 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="extract-content" Feb 16 15:29:25 crc kubenswrapper[4705]: E0216 15:29:25.249658 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.249665 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" Feb 16 15:29:25 crc kubenswrapper[4705]: E0216 15:29:25.249708 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="extract-utilities" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.249716 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="extract-utilities" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.264365 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.266610 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"] Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.266712 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.303692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.303801 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.303871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.406327 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") pod \"certified-operators-qz8rs\" (UID: 
\"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.406455 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.406610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.407153 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.407298 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.428450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") pod \"certified-operators-qz8rs\" (UID: 
\"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs"
Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.593319 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz8rs"
Feb 16 15:29:26 crc kubenswrapper[4705]: I0216 15:29:26.108560 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"]
Feb 16 15:29:26 crc kubenswrapper[4705]: I0216 15:29:26.583446 4705 generic.go:334] "Generic (PLEG): container finished" podID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerID="98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff" exitCode=0
Feb 16 15:29:26 crc kubenswrapper[4705]: I0216 15:29:26.583538 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerDied","Data":"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff"}
Feb 16 15:29:26 crc kubenswrapper[4705]: I0216 15:29:26.583875 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerStarted","Data":"c1cd38255b851b38bfdf2fa0e752842971171b977b22335433117f4a4d1e8923"}
Feb 16 15:29:27 crc kubenswrapper[4705]: I0216 15:29:27.597008 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerStarted","Data":"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011"}
Feb 16 15:29:28 crc kubenswrapper[4705]: I0216 15:29:28.608324 4705 generic.go:334] "Generic (PLEG): container finished" podID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerID="28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011" exitCode=0
Feb 16 15:29:28 crc kubenswrapper[4705]: I0216
15:29:28.608405 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerDied","Data":"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011"}
Feb 16 15:29:29 crc kubenswrapper[4705]: I0216 15:29:29.620416 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerStarted","Data":"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b"}
Feb 16 15:29:29 crc kubenswrapper[4705]: I0216 15:29:29.652084 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qz8rs" podStartSLOduration=2.2214376590000002 podStartE2EDuration="4.652062237s" podCreationTimestamp="2026-02-16 15:29:25 +0000 UTC" firstStartedPulling="2026-02-16 15:29:26.587341988 +0000 UTC m=+2160.772319064" lastFinishedPulling="2026-02-16 15:29:29.017966566 +0000 UTC m=+2163.202943642" observedRunningTime="2026-02-16 15:29:29.639387439 +0000 UTC m=+2163.824364525" watchObservedRunningTime="2026-02-16 15:29:29.652062237 +0000 UTC m=+2163.837039303"
Feb 16 15:29:31 crc kubenswrapper[4705]: E0216 15:29:31.421833 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:29:32 crc kubenswrapper[4705]: E0216 15:29:32.421453 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf"
podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.623649 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dmwcz"]
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.629497 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.639827 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"]
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.710000 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.710139 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsn55\" (UniqueName: \"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.710192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.812521 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\"
(UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.812639 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsn55\" (UniqueName: \"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.812685 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.813500 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.813577 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.832394 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsn55\" (UniqueName:
\"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.950679 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:33 crc kubenswrapper[4705]: I0216 15:29:33.472907 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"]
Feb 16 15:29:33 crc kubenswrapper[4705]: I0216 15:29:33.666834 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerStarted","Data":"3ad7e362a5d5fec61f0b51b0a86fc6db1eddbaabfe14cce7548c481ee1985bf8"}
Feb 16 15:29:34 crc kubenswrapper[4705]: I0216 15:29:34.679175 4705 generic.go:334] "Generic (PLEG): container finished" podID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerID="bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd" exitCode=0
Feb 16 15:29:34 crc kubenswrapper[4705]: I0216 15:29:34.679256 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerDied","Data":"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd"}
Feb 16 15:29:35 crc kubenswrapper[4705]: I0216 15:29:35.593743 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qz8rs"
Feb 16 15:29:35 crc kubenswrapper[4705]: I0216 15:29:35.593985 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qz8rs"
Feb 16 15:29:35 crc kubenswrapper[4705]: I0216 15:29:35.664785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup"
status="started" pod="openshift-marketplace/certified-operators-qz8rs"
Feb 16 15:29:35 crc kubenswrapper[4705]: I0216 15:29:35.771213 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qz8rs"
Feb 16 15:29:36 crc kubenswrapper[4705]: I0216 15:29:36.712501 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerStarted","Data":"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1"}
Feb 16 15:29:36 crc kubenswrapper[4705]: I0216 15:29:36.810208 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"]
Feb 16 15:29:37 crc kubenswrapper[4705]: I0216 15:29:37.721697 4705 generic.go:334] "Generic (PLEG): container finished" podID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerID="d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1" exitCode=0
Feb 16 15:29:37 crc kubenswrapper[4705]: I0216 15:29:37.721770 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerDied","Data":"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1"}
Feb 16 15:29:37 crc kubenswrapper[4705]: I0216 15:29:37.721900 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qz8rs" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="registry-server" containerID="cri-o://a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b" gracePeriod=2
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.336431 4705 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-qz8rs"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.464069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") pod \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") "
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.464186 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") pod \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") "
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.464359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") pod \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") "
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.465829 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities" (OuterVolumeSpecName: "utilities") pod "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" (UID: "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.466170 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.472898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn" (OuterVolumeSpecName: "kube-api-access-txfhn") pod "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" (UID: "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5"). InnerVolumeSpecName "kube-api-access-txfhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.531139 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" (UID: "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.568888 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.568925 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") on node \"crc\" DevicePath \"\""
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736137 4705 generic.go:334] "Generic (PLEG): container finished" podID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerID="a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b" exitCode=0
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736245 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerDied","Data":"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b"}
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736304 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerDied","Data":"c1cd38255b851b38bfdf2fa0e752842971171b977b22335433117f4a4d1e8923"}
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736329 4705 scope.go:117] "RemoveContainer" containerID="a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736259 4705 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-qz8rs"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.741056 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerStarted","Data":"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d"}
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.757692 4705 scope.go:117] "RemoveContainer" containerID="28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.773458 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dmwcz" podStartSLOduration=3.304821954 podStartE2EDuration="6.771496563s" podCreationTimestamp="2026-02-16 15:29:32 +0000 UTC" firstStartedPulling="2026-02-16 15:29:34.683864225 +0000 UTC m=+2168.868841301" lastFinishedPulling="2026-02-16 15:29:38.150538834 +0000 UTC m=+2172.335515910" observedRunningTime="2026-02-16 15:29:38.760879033 +0000 UTC m=+2172.945856119" watchObservedRunningTime="2026-02-16 15:29:38.771496563 +0000 UTC m=+2172.956473649"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.790770 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"]
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.797508 4705 scope.go:117] "RemoveContainer" containerID="98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.801843 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"]
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.855911 4705 scope.go:117] "RemoveContainer" containerID="a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b"
Feb 16 15:29:38 crc kubenswrapper[4705]: E0216 15:29:38.856475 4705 log.go:32]
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b\": container with ID starting with a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b not found: ID does not exist" containerID="a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.856528 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b"} err="failed to get container status \"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b\": rpc error: code = NotFound desc = could not find container \"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b\": container with ID starting with a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b not found: ID does not exist"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.856565 4705 scope.go:117] "RemoveContainer" containerID="28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011"
Feb 16 15:29:38 crc kubenswrapper[4705]: E0216 15:29:38.857063 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011\": container with ID starting with 28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011 not found: ID does not exist" containerID="28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.857099 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011"} err="failed to get container status \"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011\": rpc error: code = NotFound desc = could
not find container \"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011\": container with ID starting with 28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011 not found: ID does not exist"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.857128 4705 scope.go:117] "RemoveContainer" containerID="98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff"
Feb 16 15:29:38 crc kubenswrapper[4705]: E0216 15:29:38.858831 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff\": container with ID starting with 98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff not found: ID does not exist" containerID="98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff"
Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.858864 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff"} err="failed to get container status \"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff\": rpc error: code = NotFound desc = could not find container \"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff\": container with ID starting with 98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff not found: ID does not exist"
Feb 16 15:29:40 crc kubenswrapper[4705]: I0216 15:29:40.438828 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" path="/var/lib/kubelet/pods/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5/volumes"
Feb 16 15:29:42 crc kubenswrapper[4705]: I0216 15:29:42.952738 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:42 crc kubenswrapper[4705]: I0216 15:29:42.953682 4705 kubelet.go:2542] "SyncLoop (probe)"
probe="readiness" status="" pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:44 crc kubenswrapper[4705]: I0216 15:29:44.021215 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dmwcz" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" probeResult="failure" output=<
Feb 16 15:29:44 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 15:29:44 crc kubenswrapper[4705]: >
Feb 16 15:29:45 crc kubenswrapper[4705]: E0216 15:29:45.421573 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:29:45 crc kubenswrapper[4705]: E0216 15:29:45.421605 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:29:53 crc kubenswrapper[4705]: I0216 15:29:53.042586 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:53 crc kubenswrapper[4705]: I0216 15:29:53.104311 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:53 crc kubenswrapper[4705]: I0216 15:29:53.283608 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"]
Feb 16 15:29:54 crc kubenswrapper[4705]: I0216 15:29:54.982913 4705 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openshift-marketplace/community-operators-dmwcz" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" containerID="cri-o://886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d" gracePeriod=2
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.550102 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.662227 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") pod \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") "
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.662725 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") pod \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") "
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.662909 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsn55\" (UniqueName: \"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") pod \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") "
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.664518 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities" (OuterVolumeSpecName: "utilities") pod "be21c4cc-f0fe-4e3e-aac6-1dabd8957912" (UID: "be21c4cc-f0fe-4e3e-aac6-1dabd8957912"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.669915 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55" (OuterVolumeSpecName: "kube-api-access-rsn55") pod "be21c4cc-f0fe-4e3e-aac6-1dabd8957912" (UID: "be21c4cc-f0fe-4e3e-aac6-1dabd8957912"). InnerVolumeSpecName "kube-api-access-rsn55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.710748 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be21c4cc-f0fe-4e3e-aac6-1dabd8957912" (UID: "be21c4cc-f0fe-4e3e-aac6-1dabd8957912"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.767083 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.767145 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsn55\" (UniqueName: \"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") on node \"crc\" DevicePath \"\""
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.767164 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994789 4705 generic.go:334] "Generic (PLEG): container finished" podID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912"
containerID="886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d" exitCode=0
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994841 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerDied","Data":"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d"}
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994879 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerDied","Data":"3ad7e362a5d5fec61f0b51b0a86fc6db1eddbaabfe14cce7548c481ee1985bf8"}
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994898 4705 scope.go:117] "RemoveContainer" containerID="886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d"
Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994919 4705 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-dmwcz"
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.018195 4705 scope.go:117] "RemoveContainer" containerID="d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1"
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.049196 4705 scope.go:117] "RemoveContainer" containerID="bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd"
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.056765 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"]
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.069106 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"]
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.129145 4705 scope.go:117] "RemoveContainer" containerID="886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d"
Feb 16 15:29:56 crc kubenswrapper[4705]: E0216 15:29:56.129826 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d\": container with ID starting with 886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d not found: ID does not exist" containerID="886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d"
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.129869 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d"} err="failed to get container status \"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d\": rpc error: code = NotFound desc = could not find container \"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d\": container with ID starting with 886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d not
found: ID does not exist"
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.129895 4705 scope.go:117] "RemoveContainer" containerID="d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1"
Feb 16 15:29:56 crc kubenswrapper[4705]: E0216 15:29:56.130289 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1\": container with ID starting with d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1 not found: ID does not exist" containerID="d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1"
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.130311 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1"} err="failed to get container status \"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1\": rpc error: code = NotFound desc = could not find container \"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1\": container with ID starting with d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1 not found: ID does not exist"
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.130324 4705 scope.go:117] "RemoveContainer" containerID="bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd"
Feb 16 15:29:56 crc kubenswrapper[4705]: E0216 15:29:56.130679 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd\": container with ID starting with bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd not found: ID does not exist" containerID="bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd"
Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.130706 4705
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd"} err="failed to get container status \"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd\": rpc error: code = NotFound desc = could not find container \"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd\": container with ID starting with bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd not found: ID does not exist" Feb 16 15:29:56 crc kubenswrapper[4705]: E0216 15:29:56.428255 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.431983 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" path="/var/lib/kubelet/pods/be21c4cc-f0fe-4e3e-aac6-1dabd8957912/volumes" Feb 16 15:29:57 crc kubenswrapper[4705]: E0216 15:29:57.421993 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.159160 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160212 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="extract-content" Feb 16 15:30:00 crc 
kubenswrapper[4705]: I0216 15:30:00.160227 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="extract-content" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160246 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160252 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160275 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160282 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160297 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160303 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160331 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160337 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160350 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="extract-content" Feb 16 15:30:00 crc 
kubenswrapper[4705]: I0216 15:30:00.160356 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="extract-content" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160573 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160612 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.161618 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.163797 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.164204 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.173252 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.304867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.305006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.305071 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.408055 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.408174 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.408425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.409340 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.425872 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.439395 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.660474 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:01 crc kubenswrapper[4705]: I0216 15:30:01.172015 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 15:30:02 crc kubenswrapper[4705]: I0216 15:30:02.065125 4705 generic.go:334] "Generic (PLEG): container finished" podID="d7a4c227-649b-4c63-a135-9e62204fb5e6" containerID="3d19ac739f139aac059dd3041dabf5e11ac0e7c9a2e1687b953e4ecc1918d35b" exitCode=0 Feb 16 15:30:02 crc kubenswrapper[4705]: I0216 15:30:02.065330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" event={"ID":"d7a4c227-649b-4c63-a135-9e62204fb5e6","Type":"ContainerDied","Data":"3d19ac739f139aac059dd3041dabf5e11ac0e7c9a2e1687b953e4ecc1918d35b"} Feb 16 15:30:02 crc kubenswrapper[4705]: I0216 15:30:02.065817 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" event={"ID":"d7a4c227-649b-4c63-a135-9e62204fb5e6","Type":"ContainerStarted","Data":"28e74c4e86789fb4ef2937dd57dc3f07315abf5470a68284b6cdc7061d0690ca"} Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.510664 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.644117 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") pod \"d7a4c227-649b-4c63-a135-9e62204fb5e6\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.644353 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") pod \"d7a4c227-649b-4c63-a135-9e62204fb5e6\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.644411 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") pod \"d7a4c227-649b-4c63-a135-9e62204fb5e6\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.645351 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7a4c227-649b-4c63-a135-9e62204fb5e6" (UID: "d7a4c227-649b-4c63-a135-9e62204fb5e6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.650697 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69" (OuterVolumeSpecName: "kube-api-access-mrv69") pod "d7a4c227-649b-4c63-a135-9e62204fb5e6" (UID: "d7a4c227-649b-4c63-a135-9e62204fb5e6"). 
InnerVolumeSpecName "kube-api-access-mrv69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.650888 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7a4c227-649b-4c63-a135-9e62204fb5e6" (UID: "d7a4c227-649b-4c63-a135-9e62204fb5e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.748110 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.748284 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.748306 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.097780 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" event={"ID":"d7a4c227-649b-4c63-a135-9e62204fb5e6","Type":"ContainerDied","Data":"28e74c4e86789fb4ef2937dd57dc3f07315abf5470a68284b6cdc7061d0690ca"} Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.097825 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e74c4e86789fb4ef2937dd57dc3f07315abf5470a68284b6cdc7061d0690ca" Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.097876 4705 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.606246 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"] Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.622141 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"] Feb 16 15:30:06 crc kubenswrapper[4705]: I0216 15:30:06.432849 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" path="/var/lib/kubelet/pods/fc25ae00-316a-4dfb-8a83-72fe2318da5e/volumes" Feb 16 15:30:08 crc kubenswrapper[4705]: E0216 15:30:08.421353 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:30:11 crc kubenswrapper[4705]: I0216 15:30:11.423544 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:30:11 crc kubenswrapper[4705]: E0216 15:30:11.559239 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:30:11 crc kubenswrapper[4705]: E0216 15:30:11.559605 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:30:11 crc kubenswrapper[4705]: E0216 15:30:11.559749 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5
d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:30:11 crc kubenswrapper[4705]: E0216 15:30:11.561035 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:30:19 crc kubenswrapper[4705]: E0216 15:30:19.525600 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:30:19 crc kubenswrapper[4705]: E0216 15:30:19.527160 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:30:19 crc kubenswrapper[4705]: E0216 15:30:19.527357 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:30:19 crc kubenswrapper[4705]: E0216 15:30:19.528880 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:30:25 crc kubenswrapper[4705]: E0216 15:30:25.424043 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:30:31 crc kubenswrapper[4705]: E0216 15:30:31.425184 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:30:31 crc kubenswrapper[4705]: I0216 15:30:31.684478 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:30:31 crc kubenswrapper[4705]: I0216 15:30:31.684560 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:30:32 crc kubenswrapper[4705]: I0216 15:30:32.437531 4705 generic.go:334] "Generic (PLEG): container finished" podID="447b9ab7-d583-4e71-8eca-fb352e541b13" containerID="28b8a03511de9f268771916995dae0e764844fbb28d7392f4eab5fc6742c96ba" exitCode=2 Feb 16 15:30:32 crc kubenswrapper[4705]: I0216 15:30:32.441730 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" event={"ID":"447b9ab7-d583-4e71-8eca-fb352e541b13","Type":"ContainerDied","Data":"28b8a03511de9f268771916995dae0e764844fbb28d7392f4eab5fc6742c96ba"} Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.036594 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.129860 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") pod \"447b9ab7-d583-4e71-8eca-fb352e541b13\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.129954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") pod \"447b9ab7-d583-4e71-8eca-fb352e541b13\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.130203 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") pod \"447b9ab7-d583-4e71-8eca-fb352e541b13\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.139096 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82" (OuterVolumeSpecName: "kube-api-access-vvj82") pod "447b9ab7-d583-4e71-8eca-fb352e541b13" (UID: "447b9ab7-d583-4e71-8eca-fb352e541b13"). InnerVolumeSpecName "kube-api-access-vvj82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.180585 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory" (OuterVolumeSpecName: "inventory") pod "447b9ab7-d583-4e71-8eca-fb352e541b13" (UID: "447b9ab7-d583-4e71-8eca-fb352e541b13"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.202957 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "447b9ab7-d583-4e71-8eca-fb352e541b13" (UID: "447b9ab7-d583-4e71-8eca-fb352e541b13"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.235051 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.235109 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.235132 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.476079 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" 
event={"ID":"447b9ab7-d583-4e71-8eca-fb352e541b13","Type":"ContainerDied","Data":"1755306531a2954e5ed18a62c5063702c29a4b80eca86c2194ea2e1192d5af0b"} Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.476162 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1755306531a2954e5ed18a62c5063702c29a4b80eca86c2194ea2e1192d5af0b" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.476295 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.523444 4705 scope.go:117] "RemoveContainer" containerID="5fa9675e76e9d05c53516ed8415decce4c44f3785514ae5a86a5062278da9f97" Feb 16 15:30:40 crc kubenswrapper[4705]: E0216 15:30:40.423965 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.050608 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"] Feb 16 15:30:42 crc kubenswrapper[4705]: E0216 15:30:42.052306 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a4c227-649b-4c63-a135-9e62204fb5e6" containerName="collect-profiles" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.052343 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a4c227-649b-4c63-a135-9e62204fb5e6" containerName="collect-profiles" Feb 16 15:30:42 crc kubenswrapper[4705]: E0216 15:30:42.052448 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447b9ab7-d583-4e71-8eca-fb352e541b13" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 
15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.052473 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="447b9ab7-d583-4e71-8eca-fb352e541b13" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.053211 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="447b9ab7-d583-4e71-8eca-fb352e541b13" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.053287 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a4c227-649b-4c63-a135-9e62204fb5e6" containerName="collect-profiles" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.056825 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.061587 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.062421 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.064526 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"] Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.065195 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.065679 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.204458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4nb7\" (UniqueName: 
\"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.204511 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.204582 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.307363 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.307524 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: 
\"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.307762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4nb7\" (UniqueName: \"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.316153 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.316669 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.326691 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4nb7\" (UniqueName: \"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.390795 4705 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:30:43 crc kubenswrapper[4705]: I0216 15:30:43.064658 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"] Feb 16 15:30:43 crc kubenswrapper[4705]: I0216 15:30:43.614471 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" event={"ID":"0b4f3354-7fb7-4031-9c17-270d82f9ece1","Type":"ContainerStarted","Data":"3248cb9fde276a55af37987d67a39cc404620b3d9acb9b5859deab0a32d27f89"} Feb 16 15:30:44 crc kubenswrapper[4705]: I0216 15:30:44.630084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" event={"ID":"0b4f3354-7fb7-4031-9c17-270d82f9ece1","Type":"ContainerStarted","Data":"1ff3584c7989d92952bba73c1070e5f2b6b7dabc78a615853d31c0087a4a94ae"} Feb 16 15:30:44 crc kubenswrapper[4705]: I0216 15:30:44.665910 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" podStartSLOduration=2.168536301 podStartE2EDuration="2.665888133s" podCreationTimestamp="2026-02-16 15:30:42 +0000 UTC" firstStartedPulling="2026-02-16 15:30:43.079723391 +0000 UTC m=+2237.264700467" lastFinishedPulling="2026-02-16 15:30:43.577075223 +0000 UTC m=+2237.762052299" observedRunningTime="2026-02-16 15:30:44.653342409 +0000 UTC m=+2238.838319485" watchObservedRunningTime="2026-02-16 15:30:44.665888133 +0000 UTC m=+2238.850865209" Feb 16 15:30:45 crc kubenswrapper[4705]: E0216 15:30:45.423529 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:30:52 crc kubenswrapper[4705]: E0216 15:30:52.423706 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:30:57 crc kubenswrapper[4705]: E0216 15:30:57.423555 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:31:01 crc kubenswrapper[4705]: I0216 15:31:01.684719 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:31:01 crc kubenswrapper[4705]: I0216 15:31:01.685454 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:31:04 crc kubenswrapper[4705]: E0216 15:31:04.423046 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" 
podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:31:12 crc kubenswrapper[4705]: E0216 15:31:12.423353 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:31:15 crc kubenswrapper[4705]: E0216 15:31:15.423437 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:31:24 crc kubenswrapper[4705]: E0216 15:31:24.423840 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:31:27 crc kubenswrapper[4705]: E0216 15:31:27.424423 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.788690 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"] Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.793128 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.801670 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"] Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.926688 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.926739 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.926924 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.030882 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.031363 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.031722 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.031494 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.031743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.064700 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.138691 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.745597 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"] Feb 16 15:31:30 crc kubenswrapper[4705]: I0216 15:31:30.437200 4705 generic.go:334] "Generic (PLEG): container finished" podID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerID="3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69" exitCode=0 Feb 16 15:31:30 crc kubenswrapper[4705]: I0216 15:31:30.437253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerDied","Data":"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69"} Feb 16 15:31:30 crc kubenswrapper[4705]: I0216 15:31:30.437707 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerStarted","Data":"4bf4106a7d3133a69edfc0af3627e17b7f3a8e4a9a69e05595b74dffae5ac445"} Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.451934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerStarted","Data":"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588"} Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.685597 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.686096 4705 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.686385 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.687565 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.687706 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" gracePeriod=600 Feb 16 15:31:31 crc kubenswrapper[4705]: E0216 15:31:31.842061 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.477186 4705 generic.go:334] "Generic (PLEG): container finished" podID="65b54e01-a38c-4506-ae81-64e233cb63d8" 
containerID="c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588" exitCode=0 Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.477307 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerDied","Data":"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588"} Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.486936 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" exitCode=0 Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.487006 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b"} Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.487043 4705 scope.go:117] "RemoveContainer" containerID="7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a" Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.491993 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:31:32 crc kubenswrapper[4705]: E0216 15:31:32.494937 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:31:33 crc kubenswrapper[4705]: I0216 15:31:33.502546 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerStarted","Data":"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3"} Feb 16 15:31:33 crc kubenswrapper[4705]: I0216 15:31:33.527749 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9dwp" podStartSLOduration=3.058329383 podStartE2EDuration="5.527718064s" podCreationTimestamp="2026-02-16 15:31:28 +0000 UTC" firstStartedPulling="2026-02-16 15:31:30.441598804 +0000 UTC m=+2284.626575880" lastFinishedPulling="2026-02-16 15:31:32.910987465 +0000 UTC m=+2287.095964561" observedRunningTime="2026-02-16 15:31:33.526383866 +0000 UTC m=+2287.711360942" watchObservedRunningTime="2026-02-16 15:31:33.527718064 +0000 UTC m=+2287.712695140" Feb 16 15:31:35 crc kubenswrapper[4705]: E0216 15:31:35.422848 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.139226 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.140906 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.226303 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.639254 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.707623 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"] Feb 16 15:31:41 crc kubenswrapper[4705]: E0216 15:31:41.424065 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:31:41 crc kubenswrapper[4705]: I0216 15:31:41.590137 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z9dwp" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="registry-server" containerID="cri-o://081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3" gracePeriod=2 Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.188654 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.378505 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") pod \"65b54e01-a38c-4506-ae81-64e233cb63d8\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.378990 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") pod \"65b54e01-a38c-4506-ae81-64e233cb63d8\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.379086 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") pod \"65b54e01-a38c-4506-ae81-64e233cb63d8\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.380564 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities" (OuterVolumeSpecName: "utilities") pod "65b54e01-a38c-4506-ae81-64e233cb63d8" (UID: "65b54e01-a38c-4506-ae81-64e233cb63d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.389078 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2" (OuterVolumeSpecName: "kube-api-access-lxgx2") pod "65b54e01-a38c-4506-ae81-64e233cb63d8" (UID: "65b54e01-a38c-4506-ae81-64e233cb63d8"). InnerVolumeSpecName "kube-api-access-lxgx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.416078 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65b54e01-a38c-4506-ae81-64e233cb63d8" (UID: "65b54e01-a38c-4506-ae81-64e233cb63d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.486856 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.486904 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.486921 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") on node \"crc\" DevicePath \"\"" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610635 4705 generic.go:334] "Generic (PLEG): container finished" podID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerID="081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3" exitCode=0 Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610686 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerDied","Data":"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3"} Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610723 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerDied","Data":"4bf4106a7d3133a69edfc0af3627e17b7f3a8e4a9a69e05595b74dffae5ac445"} Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610749 4705 scope.go:117] "RemoveContainer" containerID="081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610913 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dwp" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.647774 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"] Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.678485 4705 scope.go:117] "RemoveContainer" containerID="c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.681878 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"] Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.715015 4705 scope.go:117] "RemoveContainer" containerID="3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.769881 4705 scope.go:117] "RemoveContainer" containerID="081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3" Feb 16 15:31:42 crc kubenswrapper[4705]: E0216 15:31:42.771267 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3\": container with ID starting with 081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3 not found: ID does not exist" containerID="081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771298 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3"} err="failed to get container status \"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3\": rpc error: code = NotFound desc = could not find container \"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3\": container with ID starting with 081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3 not found: ID does not exist" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771320 4705 scope.go:117] "RemoveContainer" containerID="c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588" Feb 16 15:31:42 crc kubenswrapper[4705]: E0216 15:31:42.771634 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588\": container with ID starting with c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588 not found: ID does not exist" containerID="c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771658 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588"} err="failed to get container status \"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588\": rpc error: code = NotFound desc = could not find container \"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588\": container with ID starting with c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588 not found: ID does not exist" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771671 4705 scope.go:117] "RemoveContainer" containerID="3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69" Feb 16 15:31:42 crc kubenswrapper[4705]: E0216 
15:31:42.771856 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69\": container with ID starting with 3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69 not found: ID does not exist" containerID="3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69" Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771875 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69"} err="failed to get container status \"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69\": rpc error: code = NotFound desc = could not find container \"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69\": container with ID starting with 3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69 not found: ID does not exist" Feb 16 15:31:44 crc kubenswrapper[4705]: I0216 15:31:44.444653 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" path="/var/lib/kubelet/pods/65b54e01-a38c-4506-ae81-64e233cb63d8/volumes" Feb 16 15:31:45 crc kubenswrapper[4705]: I0216 15:31:45.437182 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:31:45 crc kubenswrapper[4705]: E0216 15:31:45.439177 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:31:48 crc kubenswrapper[4705]: E0216 15:31:48.425460 
4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:31:53 crc kubenswrapper[4705]: E0216 15:31:53.430666 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:31:59 crc kubenswrapper[4705]: I0216 15:31:59.421352 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:31:59 crc kubenswrapper[4705]: E0216 15:31:59.423314 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:32:03 crc kubenswrapper[4705]: E0216 15:32:03.423478 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:32:04 crc kubenswrapper[4705]: E0216 15:32:04.420707 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:32:12 crc kubenswrapper[4705]: I0216 15:32:12.420541 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:32:12 crc kubenswrapper[4705]: E0216 15:32:12.422015 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:32:18 crc kubenswrapper[4705]: E0216 15:32:18.423713 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:32:18 crc kubenswrapper[4705]: E0216 15:32:18.427685 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:32:23 crc kubenswrapper[4705]: I0216 15:32:23.420125 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:32:23 crc kubenswrapper[4705]: E0216 15:32:23.421022 4705 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:32:29 crc kubenswrapper[4705]: E0216 15:32:29.422816 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:32:29 crc kubenswrapper[4705]: E0216 15:32:29.423006 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:32:34 crc kubenswrapper[4705]: I0216 15:32:34.420110 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:32:34 crc kubenswrapper[4705]: E0216 15:32:34.421245 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:32:42 crc kubenswrapper[4705]: E0216 15:32:42.429077 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:32:44 crc kubenswrapper[4705]: E0216 15:32:44.423257 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:32:47 crc kubenswrapper[4705]: I0216 15:32:47.420267 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:32:47 crc kubenswrapper[4705]: E0216 15:32:47.420729 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:32:54 crc kubenswrapper[4705]: E0216 15:32:54.424632 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:32:57 crc kubenswrapper[4705]: E0216 15:32:57.424149 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:02 crc kubenswrapper[4705]: I0216 15:33:01.419824 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:02 crc kubenswrapper[4705]: E0216 15:33:01.420912 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:33:08 crc kubenswrapper[4705]: E0216 15:33:08.425279 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:33:10 crc kubenswrapper[4705]: E0216 15:33:10.422332 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:16 crc kubenswrapper[4705]: I0216 15:33:16.434498 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:16 crc kubenswrapper[4705]: E0216 15:33:16.435868 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:33:21 crc kubenswrapper[4705]: E0216 15:33:21.422534 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:33:22 crc kubenswrapper[4705]: E0216 15:33:22.421900 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:28 crc kubenswrapper[4705]: I0216 15:33:28.420023 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:28 crc kubenswrapper[4705]: E0216 15:33:28.421070 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:33:35 crc kubenswrapper[4705]: E0216 15:33:35.424577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:36 crc kubenswrapper[4705]: E0216 15:33:36.437994 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:33:43 crc kubenswrapper[4705]: I0216 15:33:43.421326 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:43 crc kubenswrapper[4705]: E0216 15:33:43.422669 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:33:48 crc kubenswrapper[4705]: E0216 15:33:48.427640 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:49 crc kubenswrapper[4705]: E0216 15:33:49.422971 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:33:57 crc kubenswrapper[4705]: I0216 15:33:57.424319 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:57 crc kubenswrapper[4705]: E0216 15:33:57.426833 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:01 crc kubenswrapper[4705]: E0216 15:34:01.425111 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:34:01 crc kubenswrapper[4705]: E0216 15:34:01.425339 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:10 crc kubenswrapper[4705]: I0216 15:34:10.421155 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:34:10 crc kubenswrapper[4705]: E0216 15:34:10.422281 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:14 crc kubenswrapper[4705]: E0216 15:34:14.423806 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:16 crc kubenswrapper[4705]: E0216 15:34:16.437053 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:34:21 crc kubenswrapper[4705]: I0216 15:34:21.420037 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:34:21 crc kubenswrapper[4705]: E0216 15:34:21.420922 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:29 crc kubenswrapper[4705]: E0216 15:34:29.422842 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:34:29 crc kubenswrapper[4705]: E0216 15:34:29.422889 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:34 crc kubenswrapper[4705]: I0216 15:34:34.420671 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:34:34 crc kubenswrapper[4705]: E0216 15:34:34.421669 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:40 crc kubenswrapper[4705]: E0216 15:34:40.455016 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:41 crc kubenswrapper[4705]: E0216 15:34:41.421245 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:34:48 crc kubenswrapper[4705]: I0216 15:34:48.423302 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:34:48 crc kubenswrapper[4705]: E0216 15:34:48.424441 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:51 crc kubenswrapper[4705]: E0216 15:34:51.422826 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:56 crc kubenswrapper[4705]: E0216 15:34:56.423510 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:03 crc kubenswrapper[4705]: I0216 15:35:03.424898 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:03 crc kubenswrapper[4705]: E0216 15:35:03.425851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:05 crc kubenswrapper[4705]: E0216 15:35:05.422915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:35:09 crc kubenswrapper[4705]: E0216 15:35:09.422802 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:16 crc kubenswrapper[4705]: I0216 15:35:16.435897 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:16 crc kubenswrapper[4705]: E0216 15:35:16.437323 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:18 crc kubenswrapper[4705]: E0216 15:35:18.423261 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:35:20 crc kubenswrapper[4705]: I0216 15:35:20.423013 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:35:20 crc kubenswrapper[4705]: E0216 15:35:20.566477 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:35:20 crc kubenswrapper[4705]: E0216 15:35:20.566567 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:35:20 crc kubenswrapper[4705]: E0216 15:35:20.566733 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:35:20 crc kubenswrapper[4705]: E0216 15:35:20.567948 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:27 crc kubenswrapper[4705]: I0216 15:35:27.419788 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:27 crc kubenswrapper[4705]: E0216 15:35:27.421409 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:32 crc kubenswrapper[4705]: E0216 15:35:32.559955 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:35:32 crc kubenswrapper[4705]: E0216 15:35:32.560596 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:35:32 crc kubenswrapper[4705]: E0216 15:35:32.560822 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:35:32 crc kubenswrapper[4705]: E0216 15:35:32.561974 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:35:35 crc kubenswrapper[4705]: E0216 15:35:35.422488 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:42 crc kubenswrapper[4705]: I0216 15:35:42.421038 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:42 crc kubenswrapper[4705]: E0216 15:35:42.422495 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:45 crc kubenswrapper[4705]: E0216 15:35:45.424106 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:35:46 crc kubenswrapper[4705]: E0216 15:35:46.447508 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:57 
crc kubenswrapper[4705]: I0216 15:35:57.421350 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:57 crc kubenswrapper[4705]: E0216 15:35:57.422998 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:57 crc kubenswrapper[4705]: E0216 15:35:57.426665 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:00 crc kubenswrapper[4705]: E0216 15:36:00.436212 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:08 crc kubenswrapper[4705]: E0216 15:36:08.423862 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:11 crc kubenswrapper[4705]: I0216 15:36:11.419902 4705 scope.go:117] "RemoveContainer" 
containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:36:11 crc kubenswrapper[4705]: E0216 15:36:11.420775 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:36:15 crc kubenswrapper[4705]: E0216 15:36:15.424036 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:21 crc kubenswrapper[4705]: E0216 15:36:21.424549 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:24 crc kubenswrapper[4705]: I0216 15:36:24.419755 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:36:24 crc kubenswrapper[4705]: E0216 15:36:24.420421 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:36:30 crc kubenswrapper[4705]: E0216 15:36:30.424399 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:34 crc kubenswrapper[4705]: E0216 15:36:34.425167 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:36 crc kubenswrapper[4705]: I0216 15:36:36.432492 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:36:37 crc kubenswrapper[4705]: I0216 15:36:37.602851 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6"} Feb 16 15:36:42 crc kubenswrapper[4705]: E0216 15:36:42.426555 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:47 crc kubenswrapper[4705]: E0216 15:36:47.426362 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:57 crc kubenswrapper[4705]: E0216 15:36:57.422981 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:58 crc kubenswrapper[4705]: E0216 15:36:58.424767 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:37:02 crc kubenswrapper[4705]: I0216 15:37:02.905429 4705 generic.go:334] "Generic (PLEG): container finished" podID="0b4f3354-7fb7-4031-9c17-270d82f9ece1" containerID="1ff3584c7989d92952bba73c1070e5f2b6b7dabc78a615853d31c0087a4a94ae" exitCode=2 Feb 16 15:37:02 crc kubenswrapper[4705]: I0216 15:37:02.905525 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" event={"ID":"0b4f3354-7fb7-4031-9c17-270d82f9ece1","Type":"ContainerDied","Data":"1ff3584c7989d92952bba73c1070e5f2b6b7dabc78a615853d31c0087a4a94ae"} Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.487074 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.559981 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4nb7\" (UniqueName: \"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") pod \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.560817 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") pod \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.561068 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") pod \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.574828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7" (OuterVolumeSpecName: "kube-api-access-m4nb7") pod "0b4f3354-7fb7-4031-9c17-270d82f9ece1" (UID: "0b4f3354-7fb7-4031-9c17-270d82f9ece1"). InnerVolumeSpecName "kube-api-access-m4nb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.599292 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b4f3354-7fb7-4031-9c17-270d82f9ece1" (UID: "0b4f3354-7fb7-4031-9c17-270d82f9ece1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.612720 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory" (OuterVolumeSpecName: "inventory") pod "0b4f3354-7fb7-4031-9c17-270d82f9ece1" (UID: "0b4f3354-7fb7-4031-9c17-270d82f9ece1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.664419 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.664696 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.664769 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4nb7\" (UniqueName: \"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") on node \"crc\" DevicePath \"\"" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.926015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" 
event={"ID":"0b4f3354-7fb7-4031-9c17-270d82f9ece1","Type":"ContainerDied","Data":"3248cb9fde276a55af37987d67a39cc404620b3d9acb9b5859deab0a32d27f89"} Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.926483 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3248cb9fde276a55af37987d67a39cc404620b3d9acb9b5859deab0a32d27f89" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.926073 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:37:11 crc kubenswrapper[4705]: E0216 15:37:11.422816 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:37:12 crc kubenswrapper[4705]: E0216 15:37:12.421757 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.043119 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx"] Feb 16 15:37:22 crc kubenswrapper[4705]: E0216 15:37:22.044929 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4f3354-7fb7-4031-9c17-270d82f9ece1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.044952 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4f3354-7fb7-4031-9c17-270d82f9ece1" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:37:22 crc kubenswrapper[4705]: E0216 15:37:22.044966 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="extract-utilities" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.044974 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="extract-utilities" Feb 16 15:37:22 crc kubenswrapper[4705]: E0216 15:37:22.044997 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="registry-server" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.045005 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="registry-server" Feb 16 15:37:22 crc kubenswrapper[4705]: E0216 15:37:22.045022 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="extract-content" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.045029 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="extract-content" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.045386 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="registry-server" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.045419 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4f3354-7fb7-4031-9c17-270d82f9ece1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.046702 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.052783 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.052835 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.052779 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.059017 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx"] Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.071997 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.182292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.182424 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 
15:37:22.182469 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.285579 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.285657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.285684 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.293002 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.305148 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.307896 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.385751 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:23 crc kubenswrapper[4705]: I0216 15:37:23.109986 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx"] Feb 16 15:37:23 crc kubenswrapper[4705]: I0216 15:37:23.181751 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" event={"ID":"5c695fba-8bed-4549-98f9-b708893eab8e","Type":"ContainerStarted","Data":"c431d84f3d2588c6cedef387fab4e7ebeb4c121e39cfb3ea48ace1861434f615"} Feb 16 15:37:24 crc kubenswrapper[4705]: I0216 15:37:24.198462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" event={"ID":"5c695fba-8bed-4549-98f9-b708893eab8e","Type":"ContainerStarted","Data":"339d2e080c59916666037b9af2a07a18342b8dd23aa94129299a7fe3384903ac"} Feb 16 15:37:24 crc kubenswrapper[4705]: I0216 15:37:24.236014 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" podStartSLOduration=1.7886381180000002 podStartE2EDuration="2.235987229s" podCreationTimestamp="2026-02-16 15:37:22 +0000 UTC" firstStartedPulling="2026-02-16 15:37:23.113938534 +0000 UTC m=+2637.298915600" lastFinishedPulling="2026-02-16 15:37:23.561287635 +0000 UTC m=+2637.746264711" observedRunningTime="2026-02-16 15:37:24.222747315 +0000 UTC m=+2638.407724421" watchObservedRunningTime="2026-02-16 15:37:24.235987229 +0000 UTC m=+2638.420964315" Feb 16 15:37:26 crc kubenswrapper[4705]: E0216 15:37:26.428892 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:37:27 crc kubenswrapper[4705]: E0216 15:37:27.422160 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:37:38 crc kubenswrapper[4705]: E0216 15:37:38.423740 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:37:42 crc kubenswrapper[4705]: E0216 15:37:42.422493 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:37:53 crc kubenswrapper[4705]: E0216 15:37:53.424667 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:37:55 crc kubenswrapper[4705]: E0216 15:37:55.422580 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:06 crc kubenswrapper[4705]: E0216 15:38:06.431214 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:38:06 crc kubenswrapper[4705]: E0216 15:38:06.431345 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:17 crc kubenswrapper[4705]: E0216 15:38:17.421990 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:17 crc kubenswrapper[4705]: E0216 15:38:17.422066 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:38:29 crc kubenswrapper[4705]: E0216 15:38:29.422545 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:38:31 crc kubenswrapper[4705]: E0216 15:38:31.421660 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:42 crc kubenswrapper[4705]: E0216 15:38:42.422593 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:44 crc kubenswrapper[4705]: E0216 15:38:44.421268 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:38:53 crc kubenswrapper[4705]: E0216 15:38:53.423987 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:56 crc kubenswrapper[4705]: E0216 15:38:56.428678 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:01 crc kubenswrapper[4705]: I0216 15:39:01.685717 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:39:01 crc kubenswrapper[4705]: I0216 15:39:01.686726 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:39:08 crc kubenswrapper[4705]: E0216 15:39:08.423742 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:08 crc kubenswrapper[4705]: E0216 15:39:08.423892 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:20 crc kubenswrapper[4705]: E0216 15:39:20.422305 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:21 crc kubenswrapper[4705]: E0216 15:39:21.422979 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:31 crc kubenswrapper[4705]: I0216 15:39:31.684340 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:39:31 crc kubenswrapper[4705]: I0216 15:39:31.685415 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:39:32 crc kubenswrapper[4705]: E0216 15:39:32.422928 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:32 crc kubenswrapper[4705]: E0216 15:39:32.423548 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:45 crc kubenswrapper[4705]: E0216 15:39:45.424617 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:45 crc kubenswrapper[4705]: E0216 15:39:45.425701 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:56 crc kubenswrapper[4705]: E0216 15:39:56.431046 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:57 crc kubenswrapper[4705]: E0216 15:39:57.423894 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.683884 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.684559 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.684614 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.685693 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.685760 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6" gracePeriod=600 Feb 16 15:40:02 crc kubenswrapper[4705]: I0216 15:40:02.137349 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6" exitCode=0 Feb 16 15:40:02 crc kubenswrapper[4705]: I0216 15:40:02.137410 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6"} Feb 16 15:40:02 crc kubenswrapper[4705]: I0216 15:40:02.137873 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"} Feb 16 15:40:02 crc kubenswrapper[4705]: I0216 15:40:02.137902 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:40:07 crc kubenswrapper[4705]: E0216 15:40:07.421873 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:40:08 crc kubenswrapper[4705]: E0216 15:40:08.432812 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:18 crc kubenswrapper[4705]: E0216 15:40:18.423577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:40:22 crc kubenswrapper[4705]: E0216 15:40:22.425071 
4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.745209 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.762651 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.762816 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.832990 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.833151 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.833204 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") pod 
\"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.943544 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.944128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.947487 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.948219 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.948909 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") pod \"redhat-operators-g4ngb\" (UID: 
\"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.978460 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:23 crc kubenswrapper[4705]: I0216 15:40:23.097763 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:23.671091 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:24.473192 4705 generic.go:334] "Generic (PLEG): container finished" podID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerID="219468d8899b7955d3e9b9a231d29a968f0060c5e43d73eaf27c9242987b442e" exitCode=0 Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:24.473629 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerDied","Data":"219468d8899b7955d3e9b9a231d29a968f0060c5e43d73eaf27c9242987b442e"} Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:24.474188 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerStarted","Data":"eda671dae6a4b001a13bb9df0f6a3c3fc919f1941fb2808a4b7428464c673a61"} Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:24.478097 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:40:26 crc kubenswrapper[4705]: I0216 15:40:26.511244 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerStarted","Data":"898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd"} Feb 16 15:40:29 crc kubenswrapper[4705]: E0216 15:40:29.923940 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf237e260_e672_4b6e_8c0d_1fea39f1724f.slice/crio-898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:40:30 crc kubenswrapper[4705]: I0216 15:40:30.580747 4705 generic.go:334] "Generic (PLEG): container finished" podID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerID="898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd" exitCode=0 Feb 16 15:40:30 crc kubenswrapper[4705]: I0216 15:40:30.580880 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerDied","Data":"898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd"} Feb 16 15:40:31 crc kubenswrapper[4705]: I0216 15:40:31.600229 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerStarted","Data":"176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3"} Feb 16 15:40:31 crc kubenswrapper[4705]: I0216 15:40:31.644507 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4ngb" podStartSLOduration=3.070855288 podStartE2EDuration="9.644467021s" podCreationTimestamp="2026-02-16 15:40:22 +0000 UTC" firstStartedPulling="2026-02-16 15:40:24.477777506 +0000 UTC m=+2818.662754592" lastFinishedPulling="2026-02-16 15:40:31.051389239 +0000 UTC 
m=+2825.236366325" observedRunningTime="2026-02-16 15:40:31.625209007 +0000 UTC m=+2825.810186103" watchObservedRunningTime="2026-02-16 15:40:31.644467021 +0000 UTC m=+2825.829444127" Feb 16 15:40:33 crc kubenswrapper[4705]: I0216 15:40:33.098742 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:33 crc kubenswrapper[4705]: I0216 15:40:33.098838 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:33 crc kubenswrapper[4705]: E0216 15:40:33.553227 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:40:33 crc kubenswrapper[4705]: E0216 15:40:33.553722 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:40:33 crc kubenswrapper[4705]: E0216 15:40:33.553862 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:40:33 crc kubenswrapper[4705]: E0216 15:40:33.555386 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:40:34 crc kubenswrapper[4705]: I0216 15:40:34.148971 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4ngb" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" probeResult="failure" output=< Feb 16 15:40:34 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:40:34 crc kubenswrapper[4705]: > Feb 16 15:40:36 crc kubenswrapper[4705]: E0216 15:40:36.549613 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:40:36 crc kubenswrapper[4705]: E0216 15:40:36.550633 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:40:36 crc kubenswrapper[4705]: E0216 15:40:36.550838 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:40:36 crc kubenswrapper[4705]: E0216 15:40:36.552189 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.437171 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.440239 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.440341 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.566034 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.566593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.568661 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.670789 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.670925 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.671091 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.671502 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.671741 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.694461 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.773129 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.204188 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.271443 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.400435 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.734335 4705 generic.go:334] "Generic (PLEG): container finished" podID="39635490-f866-4108-9281-6105560b35a2" containerID="2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb" exitCode=0 Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.734423 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerDied","Data":"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb"} Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.734918 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerStarted","Data":"d54a0018ea82a8a39b4fd22b98aae1c3a3f867a3ad7bbd769da6bc2503e4a5b6"} Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.581798 4705 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.582760 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4ngb" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" containerID="cri-o://176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3" gracePeriod=2 Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.760644 4705 generic.go:334] "Generic (PLEG): container finished" podID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerID="176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3" exitCode=0 Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.760730 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerDied","Data":"176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3"} Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.763590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerStarted","Data":"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077"} Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.161736 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.202490 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") pod \"f237e260-e672-4b6e-8c0d-1fea39f1724f\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.202624 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") pod \"f237e260-e672-4b6e-8c0d-1fea39f1724f\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.202717 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") pod \"f237e260-e672-4b6e-8c0d-1fea39f1724f\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.204241 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities" (OuterVolumeSpecName: "utilities") pod "f237e260-e672-4b6e-8c0d-1fea39f1724f" (UID: "f237e260-e672-4b6e-8c0d-1fea39f1724f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.212106 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt" (OuterVolumeSpecName: "kube-api-access-dtxzt") pod "f237e260-e672-4b6e-8c0d-1fea39f1724f" (UID: "f237e260-e672-4b6e-8c0d-1fea39f1724f"). InnerVolumeSpecName "kube-api-access-dtxzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.306646 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.306695 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.329998 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f237e260-e672-4b6e-8c0d-1fea39f1724f" (UID: "f237e260-e672-4b6e-8c0d-1fea39f1724f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.409449 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:46 crc kubenswrapper[4705]: E0216 15:40:46.422324 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.778585 4705 generic.go:334] "Generic (PLEG): container finished" podID="39635490-f866-4108-9281-6105560b35a2" containerID="81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077" exitCode=0 Feb 16 15:40:46 crc 
kubenswrapper[4705]: I0216 15:40:46.778773 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerDied","Data":"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077"} Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.793705 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerDied","Data":"eda671dae6a4b001a13bb9df0f6a3c3fc919f1941fb2808a4b7428464c673a61"} Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.793772 4705 scope.go:117] "RemoveContainer" containerID="176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.793808 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.840143 4705 scope.go:117] "RemoveContainer" containerID="898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.842971 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.864763 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.869934 4705 scope.go:117] "RemoveContainer" containerID="219468d8899b7955d3e9b9a231d29a968f0060c5e43d73eaf27c9242987b442e" Feb 16 15:40:47 crc kubenswrapper[4705]: I0216 15:40:47.809887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" 
event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerStarted","Data":"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90"} Feb 16 15:40:47 crc kubenswrapper[4705]: I0216 15:40:47.841866 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5x5h7" podStartSLOduration=2.332956911 podStartE2EDuration="5.841838792s" podCreationTimestamp="2026-02-16 15:40:42 +0000 UTC" firstStartedPulling="2026-02-16 15:40:43.738222532 +0000 UTC m=+2837.923199618" lastFinishedPulling="2026-02-16 15:40:47.247104413 +0000 UTC m=+2841.432081499" observedRunningTime="2026-02-16 15:40:47.829321759 +0000 UTC m=+2842.014298835" watchObservedRunningTime="2026-02-16 15:40:47.841838792 +0000 UTC m=+2842.026815868" Feb 16 15:40:48 crc kubenswrapper[4705]: I0216 15:40:48.434198 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" path="/var/lib/kubelet/pods/f237e260-e672-4b6e-8c0d-1fea39f1724f/volumes" Feb 16 15:40:51 crc kubenswrapper[4705]: E0216 15:40:51.423906 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:52 crc kubenswrapper[4705]: I0216 15:40:52.774843 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:52 crc kubenswrapper[4705]: I0216 15:40:52.775643 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:52 crc kubenswrapper[4705]: I0216 15:40:52.852819 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:52 crc kubenswrapper[4705]: I0216 15:40:52.943521 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:53 crc kubenswrapper[4705]: I0216 15:40:53.995333 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:54 crc kubenswrapper[4705]: I0216 15:40:54.900183 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5x5h7" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="registry-server" containerID="cri-o://5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" gracePeriod=2 Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.469298 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.585891 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") pod \"39635490-f866-4108-9281-6105560b35a2\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.585976 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") pod \"39635490-f866-4108-9281-6105560b35a2\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.586095 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") pod 
\"39635490-f866-4108-9281-6105560b35a2\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.587056 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities" (OuterVolumeSpecName: "utilities") pod "39635490-f866-4108-9281-6105560b35a2" (UID: "39635490-f866-4108-9281-6105560b35a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.587831 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.592646 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv" (OuterVolumeSpecName: "kube-api-access-548bv") pod "39635490-f866-4108-9281-6105560b35a2" (UID: "39635490-f866-4108-9281-6105560b35a2"). InnerVolumeSpecName "kube-api-access-548bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.662943 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39635490-f866-4108-9281-6105560b35a2" (UID: "39635490-f866-4108-9281-6105560b35a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.690801 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.690883 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912155 4705 generic.go:334] "Generic (PLEG): container finished" podID="39635490-f866-4108-9281-6105560b35a2" containerID="5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" exitCode=0 Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912211 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerDied","Data":"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90"} Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912297 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerDied","Data":"d54a0018ea82a8a39b4fd22b98aae1c3a3f867a3ad7bbd769da6bc2503e4a5b6"} Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912322 4705 scope.go:117] "RemoveContainer" containerID="5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912316 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.937816 4705 scope.go:117] "RemoveContainer" containerID="81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.977751 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.977812 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.985325 4705 scope.go:117] "RemoveContainer" containerID="2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.039491 4705 scope.go:117] "RemoveContainer" containerID="5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" Feb 16 15:40:56 crc kubenswrapper[4705]: E0216 15:40:56.040286 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90\": container with ID starting with 5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90 not found: ID does not exist" containerID="5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.040358 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90"} err="failed to get container status \"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90\": rpc error: code = NotFound desc = could not find container \"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90\": container with ID starting with 5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90 not 
found: ID does not exist" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.040442 4705 scope.go:117] "RemoveContainer" containerID="81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077" Feb 16 15:40:56 crc kubenswrapper[4705]: E0216 15:40:56.041195 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077\": container with ID starting with 81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077 not found: ID does not exist" containerID="81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.041235 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077"} err="failed to get container status \"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077\": rpc error: code = NotFound desc = could not find container \"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077\": container with ID starting with 81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077 not found: ID does not exist" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.041263 4705 scope.go:117] "RemoveContainer" containerID="2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb" Feb 16 15:40:56 crc kubenswrapper[4705]: E0216 15:40:56.041775 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb\": container with ID starting with 2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb not found: ID does not exist" containerID="2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.041898 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb"} err="failed to get container status \"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb\": rpc error: code = NotFound desc = could not find container \"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb\": container with ID starting with 2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb not found: ID does not exist" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.442391 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39635490-f866-4108-9281-6105560b35a2" path="/var/lib/kubelet/pods/39635490-f866-4108-9281-6105560b35a2/volumes" Feb 16 15:40:59 crc kubenswrapper[4705]: E0216 15:40:59.423515 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:41:06 crc kubenswrapper[4705]: E0216 15:41:06.432276 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:41:11 crc kubenswrapper[4705]: E0216 15:41:11.422406 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:41:21 crc 
kubenswrapper[4705]: E0216 15:41:21.424071 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:41:25 crc kubenswrapper[4705]: E0216 15:41:25.422750 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.340516 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"] Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342160 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342179 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342201 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="extract-content" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342210 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="extract-content" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342231 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="extract-content" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 
15:41:36.342240 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="extract-content" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342265 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="extract-utilities" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342274 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="extract-utilities" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342294 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="extract-utilities" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342304 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="extract-utilities" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342333 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342341 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.343756 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.343790 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.347162 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.369938 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"] Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.431704 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.478915 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.479140 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.479214 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.583177 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.583317 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.583340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.584065 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.584604 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.615169 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8"
Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.687331 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ffz8"
Feb 16 15:41:37 crc kubenswrapper[4705]: I0216 15:41:37.368131 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"]
Feb 16 15:41:37 crc kubenswrapper[4705]: I0216 15:41:37.613604 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerStarted","Data":"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936"}
Feb 16 15:41:37 crc kubenswrapper[4705]: I0216 15:41:37.613682 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerStarted","Data":"9c3f94922ab40aed56fdd237b60b4af28ecc566a8e21d1d0b407ff4b18711778"}
Feb 16 15:41:38 crc kubenswrapper[4705]: I0216 15:41:38.629394 4705 generic.go:334] "Generic (PLEG): container finished" podID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerID="31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936" exitCode=0
Feb 16 15:41:38 crc kubenswrapper[4705]: I0216 15:41:38.629463 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerDied","Data":"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936"}
Feb 16 15:41:39 crc kubenswrapper[4705]: E0216 15:41:39.421791 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:41:39 crc kubenswrapper[4705]: I0216 15:41:39.641762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerStarted","Data":"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8"}
Feb 16 15:41:40 crc kubenswrapper[4705]: I0216 15:41:40.657520 4705 generic.go:334] "Generic (PLEG): container finished" podID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerID="7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8" exitCode=0
Feb 16 15:41:40 crc kubenswrapper[4705]: I0216 15:41:40.657615 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerDied","Data":"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8"}
Feb 16 15:41:41 crc kubenswrapper[4705]: I0216 15:41:41.676707 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerStarted","Data":"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a"}
Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.689010 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7ffz8"
Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.689740 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7ffz8"
Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.785441 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7ffz8"
Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.817685 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7ffz8" podStartSLOduration=8.364069997 podStartE2EDuration="10.817658438s" podCreationTimestamp="2026-02-16 15:41:36 +0000 UTC" firstStartedPulling="2026-02-16 15:41:38.63303931 +0000 UTC m=+2892.818016386" lastFinishedPulling="2026-02-16 15:41:41.086627751 +0000 UTC m=+2895.271604827" observedRunningTime="2026-02-16 15:41:41.717826597 +0000 UTC m=+2895.902803703" watchObservedRunningTime="2026-02-16 15:41:46.817658438 +0000 UTC m=+2901.002635524"
Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.849227 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7ffz8"
Feb 16 15:41:47 crc kubenswrapper[4705]: I0216 15:41:47.039830 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"]
Feb 16 15:41:47 crc kubenswrapper[4705]: E0216 15:41:47.423645 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:41:48 crc kubenswrapper[4705]: I0216 15:41:48.748710 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7ffz8" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="registry-server" containerID="cri-o://42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a" gracePeriod=2
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.389263 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ffz8"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.435132 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") pod \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") "
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.435236 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") pod \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") "
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.435390 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") pod \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") "
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.449171 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j" (OuterVolumeSpecName: "kube-api-access-tmm2j") pod "38f0818c-3ed8-45c0-825d-90cbd55d5fb0" (UID: "38f0818c-3ed8-45c0-825d-90cbd55d5fb0"). InnerVolumeSpecName "kube-api-access-tmm2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.465628 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities" (OuterVolumeSpecName: "utilities") pod "38f0818c-3ed8-45c0-825d-90cbd55d5fb0" (UID: "38f0818c-3ed8-45c0-825d-90cbd55d5fb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.496294 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38f0818c-3ed8-45c0-825d-90cbd55d5fb0" (UID: "38f0818c-3ed8-45c0-825d-90cbd55d5fb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.539787 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.539834 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.539847 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") on node \"crc\" DevicePath \"\""
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762284 4705 generic.go:334] "Generic (PLEG): container finished" podID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerID="42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a" exitCode=0
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762342 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerDied","Data":"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a"}
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762399 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerDied","Data":"9c3f94922ab40aed56fdd237b60b4af28ecc566a8e21d1d0b407ff4b18711778"}
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762419 4705 scope.go:117] "RemoveContainer" containerID="42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762434 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ffz8"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.796299 4705 scope.go:117] "RemoveContainer" containerID="7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.816398 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"]
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.828199 4705 scope.go:117] "RemoveContainer" containerID="31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.829485 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"]
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.886326 4705 scope.go:117] "RemoveContainer" containerID="42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a"
Feb 16 15:41:49 crc kubenswrapper[4705]: E0216 15:41:49.886767 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a\": container with ID starting with 42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a not found: ID does not exist" containerID="42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.886811 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a"} err="failed to get container status \"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a\": rpc error: code = NotFound desc = could not find container \"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a\": container with ID starting with 42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a not found: ID does not exist"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.886839 4705 scope.go:117] "RemoveContainer" containerID="7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8"
Feb 16 15:41:49 crc kubenswrapper[4705]: E0216 15:41:49.887094 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8\": container with ID starting with 7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8 not found: ID does not exist" containerID="7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.887117 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8"} err="failed to get container status \"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8\": rpc error: code = NotFound desc = could not find container \"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8\": container with ID starting with 7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8 not found: ID does not exist"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.887129 4705 scope.go:117] "RemoveContainer" containerID="31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936"
Feb 16 15:41:49 crc kubenswrapper[4705]: E0216 15:41:49.887348 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936\": container with ID starting with 31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936 not found: ID does not exist" containerID="31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936"
Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.887402 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936"} err="failed to get container status \"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936\": rpc error: code = NotFound desc = could not find container \"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936\": container with ID starting with 31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936 not found: ID does not exist"
Feb 16 15:41:50 crc kubenswrapper[4705]: E0216 15:41:50.421440 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:41:50 crc kubenswrapper[4705]: I0216 15:41:50.431947 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" path="/var/lib/kubelet/pods/38f0818c-3ed8-45c0-825d-90cbd55d5fb0/volumes"
Feb 16 15:41:58 crc kubenswrapper[4705]: E0216 15:41:58.426723 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:42:02 crc kubenswrapper[4705]: E0216 15:42:02.422225 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:42:10 crc kubenswrapper[4705]: E0216 15:42:10.423615 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:42:13 crc kubenswrapper[4705]: E0216 15:42:13.421245 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:42:25 crc kubenswrapper[4705]: E0216 15:42:25.421497 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:42:28 crc kubenswrapper[4705]: E0216 15:42:28.422528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:42:31 crc kubenswrapper[4705]: I0216 15:42:31.685050 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:42:31 crc kubenswrapper[4705]: I0216 15:42:31.686092 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:42:39 crc kubenswrapper[4705]: E0216 15:42:39.423832 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:42:40 crc kubenswrapper[4705]: E0216 15:42:40.423111 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:42:50 crc kubenswrapper[4705]: E0216 15:42:50.427942 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:42:55 crc kubenswrapper[4705]: E0216 15:42:55.422314 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:43:01 crc kubenswrapper[4705]: I0216 15:43:01.686644 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:43:01 crc kubenswrapper[4705]: I0216 15:43:01.687145 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:43:03 crc kubenswrapper[4705]: E0216 15:43:03.421789 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:43:09 crc kubenswrapper[4705]: E0216 15:43:09.424247 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:43:14 crc kubenswrapper[4705]: E0216 15:43:14.423720 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:43:23 crc kubenswrapper[4705]: E0216 15:43:23.423954 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:43:29 crc kubenswrapper[4705]: E0216 15:43:29.422022 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:43:31 crc kubenswrapper[4705]: I0216 15:43:31.684398 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:43:31 crc kubenswrapper[4705]: I0216 15:43:31.685054 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:43:31 crc kubenswrapper[4705]: I0216 15:43:31.685146 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4"
Feb 16 15:43:31 crc kubenswrapper[4705]: I0216 15:43:31.686859 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 15:43:31 crc kubenswrapper[4705]: I0216 15:43:31.686992 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" gracePeriod=600
Feb 16 15:43:31 crc kubenswrapper[4705]: E0216 15:43:31.827851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:43:32 crc kubenswrapper[4705]: I0216 15:43:32.076161 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" exitCode=0
Feb 16 15:43:32 crc kubenswrapper[4705]: I0216 15:43:32.076226 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"}
Feb 16 15:43:32 crc kubenswrapper[4705]: I0216 15:43:32.076282 4705 scope.go:117] "RemoveContainer" containerID="8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6"
Feb 16 15:43:32 crc kubenswrapper[4705]: I0216 15:43:32.077351 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:43:32 crc kubenswrapper[4705]: E0216 15:43:32.077890 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:43:38 crc kubenswrapper[4705]: E0216 15:43:38.458244 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:43:39 crc kubenswrapper[4705]: I0216 15:43:39.196021 4705 generic.go:334] "Generic (PLEG): container finished" podID="5c695fba-8bed-4549-98f9-b708893eab8e" containerID="339d2e080c59916666037b9af2a07a18342b8dd23aa94129299a7fe3384903ac" exitCode=2
Feb 16 15:43:39 crc kubenswrapper[4705]: I0216 15:43:39.196112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" event={"ID":"5c695fba-8bed-4549-98f9-b708893eab8e","Type":"ContainerDied","Data":"339d2e080c59916666037b9af2a07a18342b8dd23aa94129299a7fe3384903ac"}
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.775037 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx"
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.836509 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") pod \"5c695fba-8bed-4549-98f9-b708893eab8e\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") "
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.836945 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") pod \"5c695fba-8bed-4549-98f9-b708893eab8e\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") "
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.837069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") pod \"5c695fba-8bed-4549-98f9-b708893eab8e\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") "
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.861795 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2" (OuterVolumeSpecName: "kube-api-access-cx4k2") pod "5c695fba-8bed-4549-98f9-b708893eab8e" (UID: "5c695fba-8bed-4549-98f9-b708893eab8e"). InnerVolumeSpecName "kube-api-access-cx4k2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.893878 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory" (OuterVolumeSpecName: "inventory") pod "5c695fba-8bed-4549-98f9-b708893eab8e" (UID: "5c695fba-8bed-4549-98f9-b708893eab8e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.899108 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5c695fba-8bed-4549-98f9-b708893eab8e" (UID: "5c695fba-8bed-4549-98f9-b708893eab8e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.940579 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") on node \"crc\" DevicePath \"\""
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.940615 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.940625 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 15:43:41 crc kubenswrapper[4705]: I0216 15:43:41.232005 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" event={"ID":"5c695fba-8bed-4549-98f9-b708893eab8e","Type":"ContainerDied","Data":"c431d84f3d2588c6cedef387fab4e7ebeb4c121e39cfb3ea48ace1861434f615"}
Feb 16 15:43:41 crc kubenswrapper[4705]: I0216 15:43:41.232617 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c431d84f3d2588c6cedef387fab4e7ebeb4c121e39cfb3ea48ace1861434f615"
Feb 16 15:43:41 crc kubenswrapper[4705]: I0216 15:43:41.232230 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx"
Feb 16 15:43:41 crc kubenswrapper[4705]: E0216 15:43:41.424138 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:43:45 crc kubenswrapper[4705]: I0216 15:43:45.420690 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:43:45 crc kubenswrapper[4705]: E0216 15:43:45.421528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:43:52 crc kubenswrapper[4705]: E0216 15:43:52.422987 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:43:53 crc kubenswrapper[4705]: E0216 15:43:53.421649 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:44:00 crc kubenswrapper[4705]: I0216 15:44:00.424137 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:44:00 crc kubenswrapper[4705]: E0216 15:44:00.425754 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:44:03 crc kubenswrapper[4705]: E0216 15:44:03.422902 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:44:04 crc kubenswrapper[4705]: E0216 15:44:04.421629 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:44:12 crc kubenswrapper[4705]: I0216 15:44:12.420872 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:44:12 crc kubenswrapper[4705]: E0216 15:44:12.421857 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:44:15 crc kubenswrapper[4705]: E0216 15:44:15.421589 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.043496 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq"]
Feb 16 15:44:18 crc kubenswrapper[4705]: E0216 15:44:18.044442 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="extract-utilities"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044458 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="extract-utilities"
Feb 16 15:44:18 crc kubenswrapper[4705]: E0216 15:44:18.044482 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="registry-server"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044488 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="registry-server"
Feb 16 15:44:18 crc kubenswrapper[4705]: E0216 15:44:18.044501 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c695fba-8bed-4549-98f9-b708893eab8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044508 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c695fba-8bed-4549-98f9-b708893eab8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 15:44:18 crc kubenswrapper[4705]: E0216 15:44:18.044536 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="extract-content"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044542 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="extract-content"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044910 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="registry-server"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044925 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c695fba-8bed-4549-98f9-b708893eab8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.046005 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.049828 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.050199 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.052356 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.052702 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.057066 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq"]
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.196247 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.196469 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq"
Feb 16 15:44:18 crc kubenswrapper[4705]: I0216
15:44:18.196506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.299054 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.299203 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.299228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.305971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.306038 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.317483 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.369555 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.932094 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq"] Feb 16 15:44:19 crc kubenswrapper[4705]: E0216 15:44:19.424079 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:44:19 crc kubenswrapper[4705]: I0216 15:44:19.740839 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" event={"ID":"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f","Type":"ContainerStarted","Data":"dd397664545fa9d1d67f27582e093a084832d9e2d00b116935a393b711efe37a"} Feb 16 15:44:19 crc kubenswrapper[4705]: I0216 15:44:19.741143 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" event={"ID":"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f","Type":"ContainerStarted","Data":"842d1b8af9655d4e70d2597153b8f55857f772830bba8347b474673654c258f4"} Feb 16 15:44:19 crc kubenswrapper[4705]: I0216 15:44:19.760727 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" podStartSLOduration=1.331502704 podStartE2EDuration="1.760697867s" podCreationTimestamp="2026-02-16 15:44:18 +0000 UTC" firstStartedPulling="2026-02-16 15:44:18.935786016 +0000 UTC m=+3053.120763092" lastFinishedPulling="2026-02-16 15:44:19.364981169 +0000 UTC m=+3053.549958255" observedRunningTime="2026-02-16 15:44:19.756142618 +0000 UTC m=+3053.941119694" watchObservedRunningTime="2026-02-16 15:44:19.760697867 
+0000 UTC m=+3053.945674943" Feb 16 15:44:24 crc kubenswrapper[4705]: I0216 15:44:24.419560 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:44:24 crc kubenswrapper[4705]: E0216 15:44:24.420452 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:44:26 crc kubenswrapper[4705]: E0216 15:44:26.437638 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:44:33 crc kubenswrapper[4705]: E0216 15:44:33.422959 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:44:36 crc kubenswrapper[4705]: I0216 15:44:36.430336 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:44:36 crc kubenswrapper[4705]: E0216 15:44:36.433056 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:44:39 crc kubenswrapper[4705]: E0216 15:44:39.423772 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:44:47 crc kubenswrapper[4705]: E0216 15:44:47.422134 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:44:49 crc kubenswrapper[4705]: I0216 15:44:49.420887 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:44:49 crc kubenswrapper[4705]: E0216 15:44:49.421934 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:44:51 crc kubenswrapper[4705]: E0216 15:44:51.422693 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:44:59 crc kubenswrapper[4705]: E0216 15:44:59.422752 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.168806 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.171030 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.173132 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.173235 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.181577 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.181645 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.181806 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.191778 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.284151 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.284565 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.284663 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.285520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.296840 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.302854 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.420628 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:45:00 crc kubenswrapper[4705]: E0216 15:45:00.421260 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.497559 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:01 crc kubenswrapper[4705]: I0216 15:45:01.541813 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 15:45:02 crc kubenswrapper[4705]: I0216 15:45:02.546667 4705 generic.go:334] "Generic (PLEG): container finished" podID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" containerID="c5799f899046339461728bd5e74a089bc2fd5675a54e2ff521c9c4de9307b408" exitCode=0 Feb 16 15:45:02 crc kubenswrapper[4705]: I0216 15:45:02.546771 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" event={"ID":"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd","Type":"ContainerDied","Data":"c5799f899046339461728bd5e74a089bc2fd5675a54e2ff521c9c4de9307b408"} Feb 16 15:45:02 crc kubenswrapper[4705]: I0216 15:45:02.546996 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" event={"ID":"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd","Type":"ContainerStarted","Data":"506d6b1b7a668927ea41f719e81b48c781e2fcbf80489a2eb9a59dcb33bbc03c"} Feb 16 15:45:03 crc kubenswrapper[4705]: I0216 15:45:03.972872 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.070254 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") pod \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.070359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") pod \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.070575 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") pod \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.071758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume" (OuterVolumeSpecName: "config-volume") pod "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" (UID: "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.077017 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k" (OuterVolumeSpecName: "kube-api-access-gvn2k") pod "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" (UID: "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd"). 
InnerVolumeSpecName "kube-api-access-gvn2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.077540 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" (UID: "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.174513 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.174565 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.174575 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.568342 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" event={"ID":"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd","Type":"ContainerDied","Data":"506d6b1b7a668927ea41f719e81b48c781e2fcbf80489a2eb9a59dcb33bbc03c"} Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.568660 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="506d6b1b7a668927ea41f719e81b48c781e2fcbf80489a2eb9a59dcb33bbc03c" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.568735 4705 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:05 crc kubenswrapper[4705]: I0216 15:45:05.065973 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"] Feb 16 15:45:05 crc kubenswrapper[4705]: I0216 15:45:05.107550 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"] Feb 16 15:45:06 crc kubenswrapper[4705]: E0216 15:45:06.437266 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:45:06 crc kubenswrapper[4705]: I0216 15:45:06.440064 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24c9b6f2-f412-4860-9524-8b671c477f83" path="/var/lib/kubelet/pods/24c9b6f2-f412-4860-9524-8b671c477f83/volumes" Feb 16 15:45:11 crc kubenswrapper[4705]: I0216 15:45:11.419479 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:45:11 crc kubenswrapper[4705]: E0216 15:45:11.420464 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:45:14 crc kubenswrapper[4705]: E0216 15:45:14.429811 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:45:18 crc kubenswrapper[4705]: E0216 15:45:18.426549 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:45:23 crc kubenswrapper[4705]: I0216 15:45:23.421178 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:45:23 crc kubenswrapper[4705]: E0216 15:45:23.422278 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:45:29 crc kubenswrapper[4705]: E0216 15:45:29.423939 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:45:32 crc kubenswrapper[4705]: E0216 15:45:32.424974 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:45:35 crc kubenswrapper[4705]: I0216 15:45:35.139180 4705 scope.go:117] "RemoveContainer" containerID="6fb2c5a749e97a8125f039d31686c6310a49662f79ec4dbdd96faae30b6b0365" Feb 16 15:45:38 crc kubenswrapper[4705]: I0216 15:45:38.420447 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:45:38 crc kubenswrapper[4705]: E0216 15:45:38.422648 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:45:44 crc kubenswrapper[4705]: I0216 15:45:44.428451 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.937207 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.937883 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.938136 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.939874 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.966114 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.966210 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.966478 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.967820 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:45:49 crc kubenswrapper[4705]: I0216 15:45:49.421164 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:45:49 crc kubenswrapper[4705]: E0216 15:45:49.422906 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:45:57 crc kubenswrapper[4705]: E0216 15:45:57.423210 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:45:59 crc kubenswrapper[4705]: E0216 15:45:59.422120 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:46:04 crc kubenswrapper[4705]: I0216 15:46:04.419881 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:46:04 crc kubenswrapper[4705]: E0216 15:46:04.420823 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:46:08 crc kubenswrapper[4705]: E0216 15:46:08.422495 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:46:14 crc kubenswrapper[4705]: E0216 15:46:14.423068 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:46:16 crc kubenswrapper[4705]: I0216 15:46:16.427975 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:46:16 crc kubenswrapper[4705]: E0216 15:46:16.428668 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:46:22 crc kubenswrapper[4705]: E0216 15:46:22.422320 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:46:27 crc kubenswrapper[4705]: E0216 15:46:27.420913 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:46:29 crc kubenswrapper[4705]: I0216 15:46:29.176993 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-85b76884b7-g4c57" podUID="811fab8b-dbb5-4985-b67f-d3671ea6ff9b" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 16 15:46:30 crc kubenswrapper[4705]: I0216 15:46:30.420591 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:46:30 crc kubenswrapper[4705]: E0216 15:46:30.421171 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:46:34 crc kubenswrapper[4705]: E0216 15:46:34.422941 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:46:42 crc kubenswrapper[4705]: E0216 15:46:42.424962 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:46:43 crc kubenswrapper[4705]: I0216 15:46:43.421729 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:46:43 crc kubenswrapper[4705]: E0216 15:46:43.422166 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:46:46 crc kubenswrapper[4705]: E0216 15:46:46.432448 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:46:55 crc kubenswrapper[4705]: I0216 15:46:55.420352 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:46:55 crc kubenswrapper[4705]: E0216 15:46:55.421259 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:46:57 crc kubenswrapper[4705]: E0216 15:46:57.422744 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:46:58 crc kubenswrapper[4705]: E0216 15:46:58.423946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:47:08 crc kubenswrapper[4705]: E0216 15:47:08.422543 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:47:09 crc kubenswrapper[4705]: I0216 15:47:09.420776 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:47:09 crc kubenswrapper[4705]: E0216 15:47:09.422093 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:47:10 crc kubenswrapper[4705]: E0216 15:47:10.423581 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.930267 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nr2gj"]
Feb 16 15:47:16 crc kubenswrapper[4705]: E0216 15:47:16.931599 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" containerName="collect-profiles"
Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.931617 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" containerName="collect-profiles"
Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.931974 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" containerName="collect-profiles"
Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.934184 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.946018 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"]
Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.976201 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.976469 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.976520 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.079090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.079350 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.079421 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.080010 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.080237 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.101751 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.273022 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.936306 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"]
Feb 16 15:47:18 crc kubenswrapper[4705]: I0216 15:47:18.180923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerStarted","Data":"c815de1a318bd321678c402001f3fc4a11a959753a5ea9a79d6f02d5a2ff47ff"}
Feb 16 15:47:19 crc kubenswrapper[4705]: I0216 15:47:19.206068 4705 generic.go:334] "Generic (PLEG): container finished" podID="f830efc9-fda9-4d23-9348-7f07420d7006" containerID="21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14" exitCode=0
Feb 16 15:47:19 crc kubenswrapper[4705]: I0216 15:47:19.206306 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerDied","Data":"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14"}
Feb 16 15:47:20 crc kubenswrapper[4705]: I0216 15:47:20.224035 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerStarted","Data":"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41"}
Feb 16 15:47:20 crc kubenswrapper[4705]: I0216 15:47:20.420454 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:47:20 crc kubenswrapper[4705]: E0216 15:47:20.421526 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:47:20 crc kubenswrapper[4705]: E0216 15:47:20.421587 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:47:21 crc kubenswrapper[4705]: E0216 15:47:21.422423 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:47:22 crc kubenswrapper[4705]: I0216 15:47:22.248206 4705 generic.go:334] "Generic (PLEG): container finished" podID="f830efc9-fda9-4d23-9348-7f07420d7006" containerID="ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41" exitCode=0
Feb 16 15:47:22 crc kubenswrapper[4705]: I0216 15:47:22.248351 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerDied","Data":"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41"}
Feb 16 15:47:23 crc kubenswrapper[4705]: I0216 15:47:23.261245 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerStarted","Data":"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b"}
Feb 16 15:47:23 crc kubenswrapper[4705]: I0216 15:47:23.294738 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nr2gj" podStartSLOduration=3.826215517 podStartE2EDuration="7.294720852s" podCreationTimestamp="2026-02-16 15:47:16 +0000 UTC" firstStartedPulling="2026-02-16 15:47:19.210785673 +0000 UTC m=+3233.395762749" lastFinishedPulling="2026-02-16 15:47:22.679290968 +0000 UTC m=+3236.864268084" observedRunningTime="2026-02-16 15:47:23.288087395 +0000 UTC m=+3237.473064551" watchObservedRunningTime="2026-02-16 15:47:23.294720852 +0000 UTC m=+3237.479697928"
Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.274640 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.275607 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.349743 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.407083 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.614452 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"]
Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.331326 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nr2gj" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="registry-server" containerID="cri-o://3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b" gracePeriod=2
Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.900668 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.981384 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") pod \"f830efc9-fda9-4d23-9348-7f07420d7006\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") "
Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.981861 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") pod \"f830efc9-fda9-4d23-9348-7f07420d7006\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") "
Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.981992 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") pod \"f830efc9-fda9-4d23-9348-7f07420d7006\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") "
Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.982268 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities" (OuterVolumeSpecName: "utilities") pod "f830efc9-fda9-4d23-9348-7f07420d7006" (UID: "f830efc9-fda9-4d23-9348-7f07420d7006"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.983082 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.988435 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt" (OuterVolumeSpecName: "kube-api-access-mhwwt") pod "f830efc9-fda9-4d23-9348-7f07420d7006" (UID: "f830efc9-fda9-4d23-9348-7f07420d7006"). InnerVolumeSpecName "kube-api-access-mhwwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.035242 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f830efc9-fda9-4d23-9348-7f07420d7006" (UID: "f830efc9-fda9-4d23-9348-7f07420d7006"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.084332 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.084416 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") on node \"crc\" DevicePath \"\""
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347397 4705 generic.go:334] "Generic (PLEG): container finished" podID="f830efc9-fda9-4d23-9348-7f07420d7006" containerID="3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b" exitCode=0
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347481 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerDied","Data":"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b"}
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerDied","Data":"c815de1a318bd321678c402001f3fc4a11a959753a5ea9a79d6f02d5a2ff47ff"}
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347539 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nr2gj"
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347567 4705 scope.go:117] "RemoveContainer" containerID="3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b"
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.381134 4705 scope.go:117] "RemoveContainer" containerID="ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41"
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.418741 4705 scope.go:117] "RemoveContainer" containerID="21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14"
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.439128 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"]
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.439184 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"]
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.481271 4705 scope.go:117] "RemoveContainer" containerID="3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b"
Feb 16 15:47:30 crc kubenswrapper[4705]: E0216 15:47:30.482075 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b\": container with ID starting with 3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b not found: ID does not exist" containerID="3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b"
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.482152 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b"} err="failed to get container status \"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b\": rpc error: code = NotFound desc = could not find container \"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b\": container with ID starting with 3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b not found: ID does not exist"
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.482191 4705 scope.go:117] "RemoveContainer" containerID="ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41"
Feb 16 15:47:30 crc kubenswrapper[4705]: E0216 15:47:30.482636 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41\": container with ID starting with ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41 not found: ID does not exist" containerID="ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41"
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.482686 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41"} err="failed to get container status \"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41\": rpc error: code = NotFound desc = could not find container \"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41\": container with ID starting with ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41 not found: ID does not exist"
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.482717 4705 scope.go:117] "RemoveContainer" containerID="21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14"
Feb 16 15:47:30 crc kubenswrapper[4705]: E0216 15:47:30.483016 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14\": container with ID starting with 21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14 not found: ID does not exist" containerID="21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14"
Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.483046 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14"} err="failed to get container status \"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14\": rpc error: code = NotFound desc = could not find container \"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14\": container with ID starting with 21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14 not found: ID does not exist"
Feb 16 15:47:31 crc kubenswrapper[4705]: E0216 15:47:31.422895 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:47:32 crc kubenswrapper[4705]: I0216 15:47:32.431461 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" path="/var/lib/kubelet/pods/f830efc9-fda9-4d23-9348-7f07420d7006/volumes"
Feb 16 15:47:34 crc kubenswrapper[4705]: E0216 15:47:34.422129 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:47:35 crc kubenswrapper[4705]: I0216 15:47:35.419898 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"
Feb 16 15:47:35 crc kubenswrapper[4705]: E0216 15:47:35.420437 4705 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:47:45 crc kubenswrapper[4705]: E0216 15:47:45.423549 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:47:49 crc kubenswrapper[4705]: E0216 15:47:49.423302 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:47:50 crc kubenswrapper[4705]: I0216 15:47:50.422422 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:47:50 crc kubenswrapper[4705]: E0216 15:47:50.423044 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:47:59 crc kubenswrapper[4705]: E0216 15:47:59.423818 4705 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:48:03 crc kubenswrapper[4705]: I0216 15:48:03.420228 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:48:03 crc kubenswrapper[4705]: E0216 15:48:03.421164 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:48:03 crc kubenswrapper[4705]: E0216 15:48:03.421687 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:14 crc kubenswrapper[4705]: E0216 15:48:14.423153 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:48:16 crc kubenswrapper[4705]: E0216 15:48:16.432355 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:17 crc kubenswrapper[4705]: I0216 15:48:17.419665 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:48:17 crc kubenswrapper[4705]: E0216 15:48:17.420511 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:48:27 crc kubenswrapper[4705]: E0216 15:48:27.424197 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:27 crc kubenswrapper[4705]: E0216 15:48:27.424272 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:48:28 crc kubenswrapper[4705]: I0216 15:48:28.421147 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:48:28 crc kubenswrapper[4705]: E0216 15:48:28.422289 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:48:39 crc kubenswrapper[4705]: E0216 15:48:39.423231 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:41 crc kubenswrapper[4705]: E0216 15:48:41.421735 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:48:42 crc kubenswrapper[4705]: I0216 15:48:42.419998 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:48:43 crc kubenswrapper[4705]: I0216 15:48:43.341608 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8"} Feb 16 15:48:54 crc kubenswrapper[4705]: E0216 15:48:54.424479 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:55 crc kubenswrapper[4705]: E0216 15:48:55.423012 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:05 crc kubenswrapper[4705]: E0216 15:49:05.423014 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:08 crc kubenswrapper[4705]: E0216 15:49:08.423196 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:17 crc kubenswrapper[4705]: E0216 15:49:17.425187 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:23 crc kubenswrapper[4705]: E0216 15:49:23.423148 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:32 crc kubenswrapper[4705]: E0216 15:49:32.422466 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:34 crc kubenswrapper[4705]: E0216 15:49:34.421815 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:45 crc kubenswrapper[4705]: E0216 15:49:45.422250 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:47 crc kubenswrapper[4705]: E0216 15:49:47.424615 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:59 crc kubenswrapper[4705]: E0216 15:49:59.424978 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:59 crc kubenswrapper[4705]: E0216 15:49:59.425258 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:12 crc kubenswrapper[4705]: E0216 15:50:12.422255 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:14 crc kubenswrapper[4705]: E0216 15:50:14.421759 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:50:23 crc kubenswrapper[4705]: E0216 15:50:23.422083 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:27 crc kubenswrapper[4705]: E0216 15:50:27.422818 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.254489 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:31 crc kubenswrapper[4705]: E0216 15:50:31.255562 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="extract-utilities" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.255576 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="extract-utilities" Feb 16 15:50:31 crc kubenswrapper[4705]: E0216 15:50:31.255587 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="extract-content" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.255593 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="extract-content" Feb 16 15:50:31 crc kubenswrapper[4705]: E0216 15:50:31.255606 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="registry-server" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.255614 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="registry-server" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.255850 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="registry-server" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.257634 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.272305 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.332770 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.332839 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.333057 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.435943 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.436032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.436102 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.437009 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.437251 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.455485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.630073 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:32 crc kubenswrapper[4705]: I0216 15:50:32.154255 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:32 crc kubenswrapper[4705]: I0216 15:50:32.734933 4705 generic.go:334] "Generic (PLEG): container finished" podID="b012865d-7789-4025-b085-85099262b2e7" containerID="730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d" exitCode=0 Feb 16 15:50:32 crc kubenswrapper[4705]: I0216 15:50:32.735278 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerDied","Data":"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d"} Feb 16 15:50:32 crc kubenswrapper[4705]: I0216 15:50:32.735330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerStarted","Data":"449e1847ce5c6224e7f6503e083f2d4afc066c34398cfa6124ed5426ddeb28b3"} Feb 16 15:50:34 crc kubenswrapper[4705]: I0216 15:50:34.760056 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerStarted","Data":"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb"} Feb 16 15:50:35 crc kubenswrapper[4705]: I0216 15:50:35.777137 4705 generic.go:334] "Generic (PLEG): container finished" podID="df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" containerID="dd397664545fa9d1d67f27582e093a084832d9e2d00b116935a393b711efe37a" exitCode=2 Feb 16 15:50:35 crc kubenswrapper[4705]: I0216 15:50:35.777240 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" 
event={"ID":"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f","Type":"ContainerDied","Data":"dd397664545fa9d1d67f27582e093a084832d9e2d00b116935a393b711efe37a"} Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.343102 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.401876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") pod \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.402092 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") pod \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.402122 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") pod \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.413702 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl" (OuterVolumeSpecName: "kube-api-access-km2dl") pod "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" (UID: "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f"). InnerVolumeSpecName "kube-api-access-km2dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.434748 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory" (OuterVolumeSpecName: "inventory") pod "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" (UID: "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.435837 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" (UID: "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.506406 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.506448 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.506459 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.802084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" 
event={"ID":"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f","Type":"ContainerDied","Data":"842d1b8af9655d4e70d2597153b8f55857f772830bba8347b474673654c258f4"} Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.802132 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842d1b8af9655d4e70d2597153b8f55857f772830bba8347b474673654c258f4" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.802143 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:50:38 crc kubenswrapper[4705]: E0216 15:50:38.426725 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:38 crc kubenswrapper[4705]: I0216 15:50:38.816781 4705 generic.go:334] "Generic (PLEG): container finished" podID="b012865d-7789-4025-b085-85099262b2e7" containerID="10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb" exitCode=0 Feb 16 15:50:38 crc kubenswrapper[4705]: I0216 15:50:38.816860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerDied","Data":"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb"} Feb 16 15:50:39 crc kubenswrapper[4705]: E0216 15:50:39.422874 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:50:40 crc 
kubenswrapper[4705]: I0216 15:50:40.841959 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerStarted","Data":"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01"} Feb 16 15:50:40 crc kubenswrapper[4705]: I0216 15:50:40.873153 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bmp9d" podStartSLOduration=2.41151906 podStartE2EDuration="9.873135336s" podCreationTimestamp="2026-02-16 15:50:31 +0000 UTC" firstStartedPulling="2026-02-16 15:50:32.738252364 +0000 UTC m=+3426.923229460" lastFinishedPulling="2026-02-16 15:50:40.19986865 +0000 UTC m=+3434.384845736" observedRunningTime="2026-02-16 15:50:40.868147035 +0000 UTC m=+3435.053124131" watchObservedRunningTime="2026-02-16 15:50:40.873135336 +0000 UTC m=+3435.058112412" Feb 16 15:50:41 crc kubenswrapper[4705]: I0216 15:50:41.630284 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:41 crc kubenswrapper[4705]: I0216 15:50:41.630332 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:42 crc kubenswrapper[4705]: I0216 15:50:42.695023 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bmp9d" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" probeResult="failure" output=< Feb 16 15:50:42 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:50:42 crc kubenswrapper[4705]: > Feb 16 15:50:49 crc kubenswrapper[4705]: I0216 15:50:49.423088 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:50:49 crc kubenswrapper[4705]: E0216 15:50:49.551808 4705 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:50:49 crc kubenswrapper[4705]: E0216 15:50:49.551883 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:50:49 crc kubenswrapper[4705]: E0216 15:50:49.552026 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:50:49 crc kubenswrapper[4705]: E0216 15:50:49.553223 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:51 crc kubenswrapper[4705]: I0216 15:50:51.703197 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:51 crc kubenswrapper[4705]: I0216 15:50:51.785196 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:51 crc kubenswrapper[4705]: I0216 15:50:51.950773 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:52 crc kubenswrapper[4705]: I0216 15:50:52.992152 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bmp9d" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" containerID="cri-o://dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" gracePeriod=2 Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.519430 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.670647 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") pod \"b012865d-7789-4025-b085-85099262b2e7\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.670791 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") pod \"b012865d-7789-4025-b085-85099262b2e7\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.671040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") pod \"b012865d-7789-4025-b085-85099262b2e7\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.671830 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities" (OuterVolumeSpecName: "utilities") pod "b012865d-7789-4025-b085-85099262b2e7" (UID: "b012865d-7789-4025-b085-85099262b2e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.677667 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd" (OuterVolumeSpecName: "kube-api-access-6bgkd") pod "b012865d-7789-4025-b085-85099262b2e7" (UID: "b012865d-7789-4025-b085-85099262b2e7"). InnerVolumeSpecName "kube-api-access-6bgkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.773698 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.773740 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.812828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b012865d-7789-4025-b085-85099262b2e7" (UID: "b012865d-7789-4025-b085-85099262b2e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.875936 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011080 4705 generic.go:334] "Generic (PLEG): container finished" podID="b012865d-7789-4025-b085-85099262b2e7" containerID="dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" exitCode=0 Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011124 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerDied","Data":"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01"} Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerDied","Data":"449e1847ce5c6224e7f6503e083f2d4afc066c34398cfa6124ed5426ddeb28b3"} Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011171 4705 scope.go:117] "RemoveContainer" containerID="dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011181 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.053471 4705 scope.go:117] "RemoveContainer" containerID="10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.059873 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.081119 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.083693 4705 scope.go:117] "RemoveContainer" containerID="730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.138464 4705 scope.go:117] "RemoveContainer" containerID="dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.138974 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01\": container with ID starting with dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01 not found: ID does not exist" containerID="dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139005 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01"} err="failed to get container status \"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01\": rpc error: code = NotFound desc = could not find container \"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01\": container with ID starting with dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01 not found: ID does not exist" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139025 4705 scope.go:117] "RemoveContainer" containerID="10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.139581 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb\": container with ID starting with 10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb not found: ID does not exist" containerID="10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139628 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb"} err="failed to get container status \"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb\": rpc error: code = NotFound desc = could not find container \"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb\": container with ID starting with 10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb not found: ID does not exist" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139644 4705 scope.go:117] "RemoveContainer" containerID="730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 
15:50:54.139966 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d\": container with ID starting with 730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d not found: ID does not exist" containerID="730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139985 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d"} err="failed to get container status \"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d\": rpc error: code = NotFound desc = could not find container \"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d\": container with ID starting with 730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d not found: ID does not exist" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.437195 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b012865d-7789-4025-b085-85099262b2e7" path="/var/lib/kubelet/pods/b012865d-7789-4025-b085-85099262b2e7/volumes" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.538222 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.538308 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.538516 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5
d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.539758 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.010870 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:50:58 crc kubenswrapper[4705]: E0216 15:50:58.012224 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012240 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" Feb 16 15:50:58 crc kubenswrapper[4705]: E0216 15:50:58.012255 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012262 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:50:58 crc kubenswrapper[4705]: E0216 15:50:58.012297 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="extract-utilities" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012305 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="extract-utilities" Feb 16 15:50:58 crc kubenswrapper[4705]: E0216 15:50:58.012317 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="extract-content" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012324 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="extract-content" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 
15:50:58.012785 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012801 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.014558 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.044543 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.193056 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.193112 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.193560 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " 
pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.295868 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.296094 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.296134 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.296994 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.297003 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " 
pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.316285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.345002 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: W0216 15:50:58.828697 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f0ae40_309a_42b8_b7c3_63d7d0dccdd4.slice/crio-c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f WatchSource:0}: Error finding container c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f: Status 404 returned error can't find the container with id c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.831341 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:50:59 crc kubenswrapper[4705]: I0216 15:50:59.081162 4705 generic.go:334] "Generic (PLEG): container finished" podID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerID="c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60" exitCode=0 Feb 16 15:50:59 crc kubenswrapper[4705]: I0216 15:50:59.081357 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerDied","Data":"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60"} Feb 16 15:50:59 crc kubenswrapper[4705]: I0216 15:50:59.081513 
4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerStarted","Data":"c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f"} Feb 16 15:51:00 crc kubenswrapper[4705]: I0216 15:51:00.095084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerStarted","Data":"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d"} Feb 16 15:51:01 crc kubenswrapper[4705]: I0216 15:51:01.684094 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:51:01 crc kubenswrapper[4705]: I0216 15:51:01.684573 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:51:02 crc kubenswrapper[4705]: I0216 15:51:02.115747 4705 generic.go:334] "Generic (PLEG): container finished" podID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerID="9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d" exitCode=0 Feb 16 15:51:02 crc kubenswrapper[4705]: I0216 15:51:02.115798 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerDied","Data":"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d"} Feb 16 15:51:03 crc kubenswrapper[4705]: I0216 15:51:03.130042 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerStarted","Data":"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67"} Feb 16 15:51:03 crc kubenswrapper[4705]: I0216 15:51:03.156583 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mmp5l" podStartSLOduration=2.724151968 podStartE2EDuration="6.156560821s" podCreationTimestamp="2026-02-16 15:50:57 +0000 UTC" firstStartedPulling="2026-02-16 15:50:59.083466954 +0000 UTC m=+3453.268444030" lastFinishedPulling="2026-02-16 15:51:02.515875807 +0000 UTC m=+3456.700852883" observedRunningTime="2026-02-16 15:51:03.149779769 +0000 UTC m=+3457.334756865" watchObservedRunningTime="2026-02-16 15:51:03.156560821 +0000 UTC m=+3457.341537897" Feb 16 15:51:03 crc kubenswrapper[4705]: E0216 15:51:03.420673 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:08 crc kubenswrapper[4705]: I0216 15:51:08.345951 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:08 crc kubenswrapper[4705]: I0216 15:51:08.346496 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:08 crc kubenswrapper[4705]: I0216 15:51:08.409648 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:09 crc kubenswrapper[4705]: I0216 15:51:09.281607 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:09 crc kubenswrapper[4705]: I0216 15:51:09.362426 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:51:09 crc kubenswrapper[4705]: E0216 15:51:09.422881 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.228035 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mmp5l" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="registry-server" containerID="cri-o://4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" gracePeriod=2 Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.798272 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.982307 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") pod \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.982505 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") pod \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.982571 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") pod \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.983204 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities" (OuterVolumeSpecName: "utilities") pod "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" (UID: "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.988654 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542" (OuterVolumeSpecName: "kube-api-access-65542") pod "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" (UID: "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4"). InnerVolumeSpecName "kube-api-access-65542". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.045676 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" (UID: "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.085709 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") on node \"crc\" DevicePath \"\"" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.085750 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.085762 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.242921 4705 generic.go:334] "Generic (PLEG): container finished" podID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerID="4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" exitCode=0 Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.243566 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerDied","Data":"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67"} Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.244079 4705 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerDied","Data":"c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f"} Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.243669 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.244131 4705 scope.go:117] "RemoveContainer" containerID="4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.265688 4705 scope.go:117] "RemoveContainer" containerID="9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.291598 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.306166 4705 scope.go:117] "RemoveContainer" containerID="c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.310871 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.383046 4705 scope.go:117] "RemoveContainer" containerID="4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" Feb 16 15:51:12 crc kubenswrapper[4705]: E0216 15:51:12.383586 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67\": container with ID starting with 4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67 not found: ID does not exist" containerID="4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 
15:51:12.383621 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67"} err="failed to get container status \"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67\": rpc error: code = NotFound desc = could not find container \"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67\": container with ID starting with 4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67 not found: ID does not exist" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.383645 4705 scope.go:117] "RemoveContainer" containerID="9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d" Feb 16 15:51:12 crc kubenswrapper[4705]: E0216 15:51:12.384033 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d\": container with ID starting with 9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d not found: ID does not exist" containerID="9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.384057 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d"} err="failed to get container status \"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d\": rpc error: code = NotFound desc = could not find container \"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d\": container with ID starting with 9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d not found: ID does not exist" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.384072 4705 scope.go:117] "RemoveContainer" containerID="c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60" Feb 16 15:51:12 crc 
kubenswrapper[4705]: E0216 15:51:12.384597 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60\": container with ID starting with c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60 not found: ID does not exist" containerID="c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.384617 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60"} err="failed to get container status \"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60\": rpc error: code = NotFound desc = could not find container \"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60\": container with ID starting with c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60 not found: ID does not exist" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.431899 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" path="/var/lib/kubelet/pods/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4/volumes" Feb 16 15:51:17 crc kubenswrapper[4705]: E0216 15:51:17.423226 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:20 crc kubenswrapper[4705]: E0216 15:51:20.422535 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:51:30 crc kubenswrapper[4705]: E0216 15:51:30.422692 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:31 crc kubenswrapper[4705]: I0216 15:51:31.684708 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:51:31 crc kubenswrapper[4705]: I0216 15:51:31.685170 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:51:33 crc kubenswrapper[4705]: E0216 15:51:33.421909 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:51:43 crc kubenswrapper[4705]: E0216 15:51:43.423609 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:47 crc kubenswrapper[4705]: E0216 15:51:47.423320 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.048428 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn"] Feb 16 15:51:55 crc kubenswrapper[4705]: E0216 15:51:55.051722 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="extract-content" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.051857 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="extract-content" Feb 16 15:51:55 crc kubenswrapper[4705]: E0216 15:51:55.051974 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="extract-utilities" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.052053 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="extract-utilities" Feb 16 15:51:55 crc kubenswrapper[4705]: E0216 15:51:55.052174 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="registry-server" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.052260 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="registry-server" Feb 16 15:51:55 crc kubenswrapper[4705]: 
I0216 15:51:55.052731 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="registry-server" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.054289 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.056933 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.057215 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.057419 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.058214 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.076793 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn"] Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.228669 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.228863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.228926 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.331746 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.331942 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.332139 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: 
\"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.339422 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.340018 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.357901 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.388180 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.997292 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn"] Feb 16 15:51:56 crc kubenswrapper[4705]: I0216 15:51:56.758298 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" event={"ID":"49d4643c-71ab-4c0f-b3cb-0f494971aa6e","Type":"ContainerStarted","Data":"7d0a2e37aabc9be4da171f1a7589105d521f79b0a14feba542fbc144bbbfd51c"} Feb 16 15:51:57 crc kubenswrapper[4705]: E0216 15:51:57.423594 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:57 crc kubenswrapper[4705]: I0216 15:51:57.778424 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" event={"ID":"49d4643c-71ab-4c0f-b3cb-0f494971aa6e","Type":"ContainerStarted","Data":"e4aefe0d3bc6b447e40b188d95e9547cb87edeaef2a29ac55cc4d26271d01d98"} Feb 16 15:51:57 crc kubenswrapper[4705]: I0216 15:51:57.818466 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" podStartSLOduration=2.237688361 podStartE2EDuration="2.818427308s" podCreationTimestamp="2026-02-16 15:51:55 +0000 UTC" firstStartedPulling="2026-02-16 15:51:56.004699985 +0000 UTC m=+3510.189677061" lastFinishedPulling="2026-02-16 15:51:56.585438932 +0000 UTC m=+3510.770416008" observedRunningTime="2026-02-16 15:51:57.80965878 +0000 UTC m=+3511.994635856" watchObservedRunningTime="2026-02-16 
15:51:57.818427308 +0000 UTC m=+3512.003404404" Feb 16 15:52:01 crc kubenswrapper[4705]: E0216 15:52:01.422653 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.686959 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.687645 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.687748 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.689918 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.690065 4705 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8" gracePeriod=600 Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.824467 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8" exitCode=0 Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.824528 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8"} Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.824571 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:52:02 crc kubenswrapper[4705]: I0216 15:52:02.839164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"} Feb 16 15:52:09 crc kubenswrapper[4705]: E0216 15:52:09.423694 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:52:15 crc kubenswrapper[4705]: E0216 15:52:15.422402 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:24 crc kubenswrapper[4705]: E0216 15:52:24.424524 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:52:27 crc kubenswrapper[4705]: E0216 15:52:27.421751 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:38 crc kubenswrapper[4705]: E0216 15:52:38.422095 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:52:40 crc kubenswrapper[4705]: E0216 15:52:40.421939 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:53 crc kubenswrapper[4705]: E0216 15:52:53.422198 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:53 crc kubenswrapper[4705]: E0216 15:52:53.422197 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:05 crc kubenswrapper[4705]: E0216 15:53:05.423826 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:07 crc kubenswrapper[4705]: E0216 15:53:07.421528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:17 crc kubenswrapper[4705]: E0216 15:53:17.421562 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:19 crc kubenswrapper[4705]: E0216 15:53:19.421209 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:30 crc kubenswrapper[4705]: E0216 15:53:30.423781 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:31 crc kubenswrapper[4705]: E0216 15:53:31.422376 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:42 crc kubenswrapper[4705]: E0216 15:53:42.422030 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:46 crc kubenswrapper[4705]: E0216 15:53:46.431584 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:53 crc kubenswrapper[4705]: E0216 15:53:53.423944 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:59 crc kubenswrapper[4705]: E0216 15:53:59.422602 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:54:08 crc kubenswrapper[4705]: E0216 15:54:08.421915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:54:11 crc kubenswrapper[4705]: E0216 15:54:11.422688 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:54:23 crc kubenswrapper[4705]: E0216 15:54:23.422851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:54:25 crc kubenswrapper[4705]: E0216 15:54:25.421946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:54:31 crc kubenswrapper[4705]: I0216 15:54:31.684083 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:54:31 crc kubenswrapper[4705]: I0216 15:54:31.684712 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:54:36 crc kubenswrapper[4705]: E0216 15:54:36.428535 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:54:38 crc kubenswrapper[4705]: E0216 15:54:38.441565 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:54:50 crc kubenswrapper[4705]: E0216 15:54:50.423846 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:54:50 crc kubenswrapper[4705]: E0216 15:54:50.427317 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:01 crc kubenswrapper[4705]: E0216 15:55:01.422068 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:01 crc kubenswrapper[4705]: I0216 15:55:01.684710 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:55:01 crc kubenswrapper[4705]: I0216 15:55:01.684759 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:55:03 crc kubenswrapper[4705]: E0216 15:55:03.422626 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:12 crc kubenswrapper[4705]: E0216 15:55:12.425169 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:15 crc kubenswrapper[4705]: E0216 15:55:15.422136 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:23 crc kubenswrapper[4705]: E0216 15:55:23.423199 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:29 crc kubenswrapper[4705]: E0216 15:55:29.422737 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.683895 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.684448 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.684494 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.685389 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.685438 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" gracePeriod=600 Feb 16 15:55:31 crc kubenswrapper[4705]: E0216 15:55:31.808548 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:55:32 crc kubenswrapper[4705]: I0216 15:55:32.462608 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" exitCode=0 Feb 16 15:55:32 crc kubenswrapper[4705]: I0216 15:55:32.462668 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"} Feb 16 15:55:32 crc kubenswrapper[4705]: I0216 15:55:32.462726 4705 scope.go:117] "RemoveContainer" containerID="4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8" Feb 16 15:55:32 crc kubenswrapper[4705]: I0216 15:55:32.464818 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:55:32 crc kubenswrapper[4705]: E0216 15:55:32.465219 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:55:34 crc kubenswrapper[4705]: E0216 15:55:34.423734 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:40 crc kubenswrapper[4705]: E0216 
15:55:40.421946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:44 crc kubenswrapper[4705]: I0216 15:55:44.419272 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:55:44 crc kubenswrapper[4705]: E0216 15:55:44.421178 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:55:49 crc kubenswrapper[4705]: E0216 15:55:49.441941 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:52 crc kubenswrapper[4705]: I0216 15:55:52.424043 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:55:52 crc kubenswrapper[4705]: E0216 15:55:52.546850 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag 
current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:55:52 crc kubenswrapper[4705]: E0216 15:55:52.546935 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:55:52 crc kubenswrapper[4705]: E0216 15:55:52.547094 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:55:52 crc kubenswrapper[4705]: E0216 15:55:52.549131 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:59 crc kubenswrapper[4705]: I0216 15:55:59.420439 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:55:59 crc kubenswrapper[4705]: E0216 15:55:59.421309 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:03 crc kubenswrapper[4705]: E0216 15:56:03.514986 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:56:03 crc kubenswrapper[4705]: E0216 15:56:03.515642 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:56:03 crc kubenswrapper[4705]: E0216 15:56:03.515803 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:56:03 crc kubenswrapper[4705]: E0216 15:56:03.516918 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:05 crc kubenswrapper[4705]: E0216 15:56:05.422399 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:56:12 crc kubenswrapper[4705]: I0216 15:56:12.419682 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:56:12 crc kubenswrapper[4705]: E0216 15:56:12.420503 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:16 crc kubenswrapper[4705]: E0216 15:56:16.437900 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:18 crc kubenswrapper[4705]: E0216 15:56:18.422711 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:56:25 crc kubenswrapper[4705]: I0216 15:56:25.419278 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:56:25 crc kubenswrapper[4705]: E0216 15:56:25.420050 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:29 crc kubenswrapper[4705]: E0216 15:56:29.423498 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:32 crc kubenswrapper[4705]: E0216 15:56:32.422278 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:56:37 crc kubenswrapper[4705]: I0216 15:56:37.420348 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:56:37 crc kubenswrapper[4705]: E0216 15:56:37.421249 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:43 crc kubenswrapper[4705]: E0216 15:56:43.420791 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:46 crc kubenswrapper[4705]: E0216 15:56:46.432087 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:56:48 crc kubenswrapper[4705]: I0216 15:56:48.420193 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:56:48 crc kubenswrapper[4705]: E0216 15:56:48.421330 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:56 crc kubenswrapper[4705]: E0216 15:56:56.427889 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:59 crc kubenswrapper[4705]: E0216 15:56:59.423469 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:00 crc kubenswrapper[4705]: I0216 15:57:00.420662 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:00 crc kubenswrapper[4705]: E0216 15:57:00.421356 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:57:07 crc kubenswrapper[4705]: E0216 15:57:07.423061 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:57:13 crc kubenswrapper[4705]: E0216 15:57:13.423965 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:15 crc kubenswrapper[4705]: I0216 15:57:15.421041 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:15 crc kubenswrapper[4705]: E0216 15:57:15.421667 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:57:22 crc kubenswrapper[4705]: E0216 15:57:22.423801 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:57:24 crc kubenswrapper[4705]: E0216 15:57:24.423823 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:26 crc kubenswrapper[4705]: I0216 15:57:26.435643 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:26 crc kubenswrapper[4705]: E0216 15:57:26.436641 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:57:36 crc kubenswrapper[4705]: E0216 15:57:36.433299 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.714471 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.721139 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.728973 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.806971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.807110 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " 
pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.807151 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.909801 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.909987 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.910081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.910428 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " 
pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.910526 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.936572 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:37 crc kubenswrapper[4705]: I0216 15:57:37.050808 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:37 crc kubenswrapper[4705]: I0216 15:57:37.651200 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:37 crc kubenswrapper[4705]: I0216 15:57:37.877659 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerStarted","Data":"4407bb3b1208fbab2924b1263c6c790691f41d878701ff18821df9bed5c5b5be"} Feb 16 15:57:38 crc kubenswrapper[4705]: E0216 15:57:38.421288 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:38 crc kubenswrapper[4705]: I0216 15:57:38.889068 4705 
generic.go:334] "Generic (PLEG): container finished" podID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerID="42ce4e0addaeffaf331f978bfecd58e49daffbcd26474b8a5e6259c4e372d5da" exitCode=0 Feb 16 15:57:38 crc kubenswrapper[4705]: I0216 15:57:38.889118 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerDied","Data":"42ce4e0addaeffaf331f978bfecd58e49daffbcd26474b8a5e6259c4e372d5da"} Feb 16 15:57:39 crc kubenswrapper[4705]: I0216 15:57:39.903298 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerStarted","Data":"fcdb2d6e6be0d768bddbedb97937147e4b45a055a895a05093067235aae58d56"} Feb 16 15:57:41 crc kubenswrapper[4705]: I0216 15:57:41.419819 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:41 crc kubenswrapper[4705]: E0216 15:57:41.420497 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:57:41 crc kubenswrapper[4705]: I0216 15:57:41.927960 4705 generic.go:334] "Generic (PLEG): container finished" podID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerID="fcdb2d6e6be0d768bddbedb97937147e4b45a055a895a05093067235aae58d56" exitCode=0 Feb 16 15:57:41 crc kubenswrapper[4705]: I0216 15:57:41.928039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" 
event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerDied","Data":"fcdb2d6e6be0d768bddbedb97937147e4b45a055a895a05093067235aae58d56"} Feb 16 15:57:42 crc kubenswrapper[4705]: I0216 15:57:42.952731 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerStarted","Data":"433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504"} Feb 16 15:57:42 crc kubenswrapper[4705]: I0216 15:57:42.975707 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jrzlm" podStartSLOduration=3.498798971 podStartE2EDuration="6.975689661s" podCreationTimestamp="2026-02-16 15:57:36 +0000 UTC" firstStartedPulling="2026-02-16 15:57:38.891602716 +0000 UTC m=+3853.076579792" lastFinishedPulling="2026-02-16 15:57:42.368493406 +0000 UTC m=+3856.553470482" observedRunningTime="2026-02-16 15:57:42.975204787 +0000 UTC m=+3857.160181893" watchObservedRunningTime="2026-02-16 15:57:42.975689661 +0000 UTC m=+3857.160666737" Feb 16 15:57:47 crc kubenswrapper[4705]: I0216 15:57:47.052331 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:47 crc kubenswrapper[4705]: I0216 15:57:47.052696 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:47 crc kubenswrapper[4705]: I0216 15:57:47.103973 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:48 crc kubenswrapper[4705]: I0216 15:57:48.089550 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:48 crc kubenswrapper[4705]: I0216 15:57:48.163209 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:50 crc kubenswrapper[4705]: I0216 15:57:50.036834 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jrzlm" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="registry-server" containerID="cri-o://433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504" gracePeriod=2 Feb 16 15:57:50 crc kubenswrapper[4705]: E0216 15:57:50.421668 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.051924 4705 generic.go:334] "Generic (PLEG): container finished" podID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerID="433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504" exitCode=0 Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.051973 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerDied","Data":"433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504"} Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.052047 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerDied","Data":"4407bb3b1208fbab2924b1263c6c790691f41d878701ff18821df9bed5c5b5be"} Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.052064 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4407bb3b1208fbab2924b1263c6c790691f41d878701ff18821df9bed5c5b5be" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 
15:57:51.147579 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.214653 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") pod \"82137727-e2d9-404a-9a97-f6a02ee6f25f\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.215308 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") pod \"82137727-e2d9-404a-9a97-f6a02ee6f25f\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.215354 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") pod \"82137727-e2d9-404a-9a97-f6a02ee6f25f\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.217183 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities" (OuterVolumeSpecName: "utilities") pod "82137727-e2d9-404a-9a97-f6a02ee6f25f" (UID: "82137727-e2d9-404a-9a97-f6a02ee6f25f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.232728 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf" (OuterVolumeSpecName: "kube-api-access-z7mqf") pod "82137727-e2d9-404a-9a97-f6a02ee6f25f" (UID: "82137727-e2d9-404a-9a97-f6a02ee6f25f"). InnerVolumeSpecName "kube-api-access-z7mqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.293581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82137727-e2d9-404a-9a97-f6a02ee6f25f" (UID: "82137727-e2d9-404a-9a97-f6a02ee6f25f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.318133 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") on node \"crc\" DevicePath \"\"" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.318164 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.318174 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:57:51 crc kubenswrapper[4705]: E0216 15:57:51.421268 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:57:52 crc kubenswrapper[4705]: I0216 15:57:52.063320 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:52 crc kubenswrapper[4705]: I0216 15:57:52.109410 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:52 crc kubenswrapper[4705]: I0216 15:57:52.122681 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:52 crc kubenswrapper[4705]: I0216 15:57:52.469222 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" path="/var/lib/kubelet/pods/82137727-e2d9-404a-9a97-f6a02ee6f25f/volumes" Feb 16 15:57:53 crc kubenswrapper[4705]: I0216 15:57:53.420789 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:53 crc kubenswrapper[4705]: E0216 15:57:53.421364 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:04 crc kubenswrapper[4705]: I0216 15:58:04.419932 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:04 crc kubenswrapper[4705]: E0216 15:58:04.420866 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:04 crc kubenswrapper[4705]: E0216 15:58:04.421959 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:58:05 crc kubenswrapper[4705]: E0216 15:58:05.422102 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:14 crc kubenswrapper[4705]: I0216 15:58:14.378249 4705 generic.go:334] "Generic (PLEG): container finished" podID="49d4643c-71ab-4c0f-b3cb-0f494971aa6e" containerID="e4aefe0d3bc6b447e40b188d95e9547cb87edeaef2a29ac55cc4d26271d01d98" exitCode=2 Feb 16 15:58:14 crc kubenswrapper[4705]: I0216 15:58:14.378356 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" event={"ID":"49d4643c-71ab-4c0f-b3cb-0f494971aa6e","Type":"ContainerDied","Data":"e4aefe0d3bc6b447e40b188d95e9547cb87edeaef2a29ac55cc4d26271d01d98"} Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.006718 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.078693 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") pod \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.078868 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") pod \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.078931 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") pod \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.087877 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977" (OuterVolumeSpecName: "kube-api-access-5r977") pod "49d4643c-71ab-4c0f-b3cb-0f494971aa6e" (UID: "49d4643c-71ab-4c0f-b3cb-0f494971aa6e"). InnerVolumeSpecName "kube-api-access-5r977". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.109538 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49d4643c-71ab-4c0f-b3cb-0f494971aa6e" (UID: "49d4643c-71ab-4c0f-b3cb-0f494971aa6e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.123781 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory" (OuterVolumeSpecName: "inventory") pod "49d4643c-71ab-4c0f-b3cb-0f494971aa6e" (UID: "49d4643c-71ab-4c0f-b3cb-0f494971aa6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.181939 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.182022 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") on node \"crc\" DevicePath \"\"" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.182081 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.399180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" 
event={"ID":"49d4643c-71ab-4c0f-b3cb-0f494971aa6e","Type":"ContainerDied","Data":"7d0a2e37aabc9be4da171f1a7589105d521f79b0a14feba542fbc144bbbfd51c"} Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.399234 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d0a2e37aabc9be4da171f1a7589105d521f79b0a14feba542fbc144bbbfd51c" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.399241 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:58:16 crc kubenswrapper[4705]: E0216 15:58:16.421394 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:16 crc kubenswrapper[4705]: E0216 15:58:16.421970 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:58:19 crc kubenswrapper[4705]: I0216 15:58:19.419782 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:19 crc kubenswrapper[4705]: E0216 15:58:19.420668 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:27 crc kubenswrapper[4705]: E0216 15:58:27.422565 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:58:31 crc kubenswrapper[4705]: I0216 15:58:31.420239 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:31 crc kubenswrapper[4705]: E0216 15:58:31.420824 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:31 crc kubenswrapper[4705]: E0216 15:58:31.422214 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:39 crc kubenswrapper[4705]: E0216 15:58:39.422203 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" 
Feb 16 15:58:44 crc kubenswrapper[4705]: E0216 15:58:44.421970 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:45 crc kubenswrapper[4705]: I0216 15:58:45.419762 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:45 crc kubenswrapper[4705]: E0216 15:58:45.420078 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:50 crc kubenswrapper[4705]: E0216 15:58:50.423568 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:58:55 crc kubenswrapper[4705]: E0216 15:58:55.423181 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:56 crc kubenswrapper[4705]: I0216 15:58:56.432011 4705 scope.go:117] "RemoveContainer" 
containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:56 crc kubenswrapper[4705]: E0216 15:58:56.432542 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:59:04 crc kubenswrapper[4705]: E0216 15:59:04.423790 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:59:06 crc kubenswrapper[4705]: E0216 15:59:06.434135 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:59:08 crc kubenswrapper[4705]: I0216 15:59:08.419587 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:59:08 crc kubenswrapper[4705]: E0216 15:59:08.420636 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:59:18 crc kubenswrapper[4705]: E0216 15:59:18.422575 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:59:19 crc kubenswrapper[4705]: E0216 15:59:19.424944 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:59:23 crc kubenswrapper[4705]: I0216 15:59:23.419909 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:59:23 crc kubenswrapper[4705]: E0216 15:59:23.420929 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:59:33 crc kubenswrapper[4705]: E0216 15:59:33.422594 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" 
Feb 16 15:59:33 crc kubenswrapper[4705]: E0216 15:59:33.422622 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:59:34 crc kubenswrapper[4705]: I0216 15:59:34.419270 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 15:59:34 crc kubenswrapper[4705]: E0216 15:59:34.419937 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:59:46 crc kubenswrapper[4705]: E0216 15:59:46.429984 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:59:46 crc kubenswrapper[4705]: E0216 15:59:46.430198 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:59:48 crc kubenswrapper[4705]: I0216 15:59:48.420002 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 15:59:48 crc kubenswrapper[4705]: E0216 15:59:48.420809 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:59:59 crc kubenswrapper[4705]: E0216 15:59:59.422525 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.189006 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"]
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.190254 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="registry-server"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.190415 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="registry-server"
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.190542 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="extract-utilities"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.190627 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="extract-utilities"
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.190727 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="extract-content"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.190802 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="extract-content"
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.190927 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d4643c-71ab-4c0f-b3cb-0f494971aa6e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.191013 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d4643c-71ab-4c0f-b3cb-0f494971aa6e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.191430 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d4643c-71ab-4c0f-b3cb-0f494971aa6e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.191598 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="registry-server"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.192943 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.224344 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.225740 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.270921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.271156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.271198 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.375124 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.375293 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.375320 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.390791 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.732704 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.732963 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.739884 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.744109 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.003111 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"]
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.445643 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"]
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.789490 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" event={"ID":"27a4e901-f4ae-4bc7-b818-5b98c0024653","Type":"ContainerStarted","Data":"566e4f974460b487e11ecfac752a62b246bf02aa88627b3d38ce65ecb5933671"}
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.789951 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" event={"ID":"27a4e901-f4ae-4bc7-b818-5b98c0024653","Type":"ContainerStarted","Data":"d9e3ba4203d8c7168153a0b9a691012aa0aff30678359b66587411b53dcfb3f5"}
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.825057 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" podStartSLOduration=1.825035008 podStartE2EDuration="1.825035008s" podCreationTimestamp="2026-02-16 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 16:00:01.813256874 +0000 UTC m=+3995.998233950" watchObservedRunningTime="2026-02-16 16:00:01.825035008 +0000 UTC m=+3996.010012084"
Feb 16 16:00:02 crc kubenswrapper[4705]: I0216 16:00:02.419586 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 16:00:02 crc kubenswrapper[4705]: E0216 16:00:02.419869 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:00:02 crc kubenswrapper[4705]: I0216 16:00:02.810192 4705 generic.go:334] "Generic (PLEG): container finished" podID="27a4e901-f4ae-4bc7-b818-5b98c0024653" containerID="566e4f974460b487e11ecfac752a62b246bf02aa88627b3d38ce65ecb5933671" exitCode=0
Feb 16 16:00:02 crc kubenswrapper[4705]: I0216 16:00:02.810277 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" event={"ID":"27a4e901-f4ae-4bc7-b818-5b98c0024653","Type":"ContainerDied","Data":"566e4f974460b487e11ecfac752a62b246bf02aa88627b3d38ce65ecb5933671"}
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.293183 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.427093 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") pod \"27a4e901-f4ae-4bc7-b818-5b98c0024653\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") "
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.427615 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") pod \"27a4e901-f4ae-4bc7-b818-5b98c0024653\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") "
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.427715 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") pod \"27a4e901-f4ae-4bc7-b818-5b98c0024653\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") "
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.428611 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume" (OuterVolumeSpecName: "config-volume") pod "27a4e901-f4ae-4bc7-b818-5b98c0024653" (UID: "27a4e901-f4ae-4bc7-b818-5b98c0024653"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.443681 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "27a4e901-f4ae-4bc7-b818-5b98c0024653" (UID: "27a4e901-f4ae-4bc7-b818-5b98c0024653"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.443746 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h" (OuterVolumeSpecName: "kube-api-access-nxr2h") pod "27a4e901-f4ae-4bc7-b818-5b98c0024653" (UID: "27a4e901-f4ae-4bc7-b818-5b98c0024653"). InnerVolumeSpecName "kube-api-access-nxr2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.521576 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"]
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.531466 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") on node \"crc\" DevicePath \"\""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.531508 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.531519 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.534403 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"]
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.831780 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" event={"ID":"27a4e901-f4ae-4bc7-b818-5b98c0024653","Type":"ContainerDied","Data":"d9e3ba4203d8c7168153a0b9a691012aa0aff30678359b66587411b53dcfb3f5"}
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.831854 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.831859 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9e3ba4203d8c7168153a0b9a691012aa0aff30678359b66587411b53dcfb3f5"
Feb 16 16:00:06 crc kubenswrapper[4705]: I0216 16:00:06.434133 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" path="/var/lib/kubelet/pods/4c6f056a-614c-4e3d-9bfe-de451b1d951d/volumes"
Feb 16 16:00:12 crc kubenswrapper[4705]: E0216 16:00:12.421650 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:13 crc kubenswrapper[4705]: E0216 16:00:13.422318 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:16 crc kubenswrapper[4705]: I0216 16:00:16.428405 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 16:00:16 crc kubenswrapper[4705]: E0216 16:00:16.429173 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:00:25 crc kubenswrapper[4705]: E0216 16:00:25.422158 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:27 crc kubenswrapper[4705]: E0216 16:00:27.421282 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:31 crc kubenswrapper[4705]: I0216 16:00:31.420443 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 16:00:31 crc kubenswrapper[4705]: E0216 16:00:31.421227 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:00:35 crc kubenswrapper[4705]: I0216 16:00:35.666184 4705 scope.go:117] "RemoveContainer" containerID="12cac5303820f9f4b9790cf3756c563cd44a6389204cd476bba276cfd10f485f"
Feb 16 16:00:39 crc kubenswrapper[4705]: E0216 16:00:39.422407 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:41 crc kubenswrapper[4705]: E0216 16:00:41.423084 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:42 crc kubenswrapper[4705]: I0216 16:00:42.420189 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 16:00:43 crc kubenswrapper[4705]: I0216 16:00:43.310223 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9"}
Feb 16 16:00:52 crc kubenswrapper[4705]: E0216 16:00:52.422544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.042044 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"]
Feb 16 16:00:53 crc kubenswrapper[4705]: E0216 16:00:53.042585 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a4e901-f4ae-4bc7-b818-5b98c0024653" containerName="collect-profiles"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.042606 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a4e901-f4ae-4bc7-b818-5b98c0024653" containerName="collect-profiles"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.042856 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a4e901-f4ae-4bc7-b818-5b98c0024653" containerName="collect-profiles"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.043936 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.047161 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.047677 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.048844 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.050067 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.067669 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"]
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.236376 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.236942 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.237117 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.338939 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.339048 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.339179 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.346187 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.347971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.357698 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.368771 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.939987 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"]
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.952951 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 16:00:54 crc kubenswrapper[4705]: E0216 16:00:54.421017 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:54 crc kubenswrapper[4705]: I0216 16:00:54.446195 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" event={"ID":"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c","Type":"ContainerStarted","Data":"f200efbd485249ddfdf83b40b40f349bd03520224bed729f92b3d095ed0ae82e"}
Feb 16 16:00:55 crc kubenswrapper[4705]: I0216 16:00:55.457641 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" event={"ID":"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c","Type":"ContainerStarted","Data":"7945d6ad7374ab3b23b668ea795bd7af5c36b315c187c0f9f1d7dca19352746b"}
Feb 16 16:00:55 crc kubenswrapper[4705]: I0216 16:00:55.481353 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" podStartSLOduration=1.8201837360000002 podStartE2EDuration="2.481325226s" podCreationTimestamp="2026-02-16 16:00:53 +0000 UTC" firstStartedPulling="2026-02-16 16:00:53.952630803 +0000 UTC m=+4048.137607889" lastFinishedPulling="2026-02-16 16:00:54.613772303 +0000 UTC m=+4048.798749379" observedRunningTime="2026-02-16 16:00:55.475171412 +0000 UTC m=+4049.660148488" watchObservedRunningTime="2026-02-16 16:00:55.481325226 +0000 UTC m=+4049.666302302"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.162252 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520961-75mxg"]
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.165130 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.178161 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520961-75mxg"]
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.334790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.335958 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.336029 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.336498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.441671 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.441793 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.441892 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.442023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.832840 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.832941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.834185 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.836468 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:01 crc kubenswrapper[4705]: I0216 16:01:01.091441 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:01 crc kubenswrapper[4705]: I0216 16:01:01.587135 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520961-75mxg"]
Feb 16 16:01:02 crc kubenswrapper[4705]: I0216 16:01:02.540546 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-75mxg" event={"ID":"98bca645-7f96-4667-adb9-cf4c5002ba78","Type":"ContainerStarted","Data":"a39d9b6ccfe88ad9e7294574b4ac279e3a3d9de4fb645305b04c16257ab0726a"}
Feb 16 16:01:02 crc kubenswrapper[4705]: I0216 16:01:02.540883 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-75mxg" event={"ID":"98bca645-7f96-4667-adb9-cf4c5002ba78","Type":"ContainerStarted","Data":"f901503dffd6c6aa6435c4b73cc4fb63e002513cbb057cd43d9905bbebca9811"}
Feb 16 16:01:02 crc kubenswrapper[4705]: I0216 16:01:02.560915 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520961-75mxg" podStartSLOduration=2.560893866 podStartE2EDuration="2.560893866s" podCreationTimestamp="2026-02-16 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 16:01:02.555347039 +0000 UTC m=+4056.740324115" watchObservedRunningTime="2026-02-16 16:01:02.560893866 +0000 UTC m=+4056.745870942"
Feb 16 16:01:05 crc kubenswrapper[4705]: I0216 16:01:05.577131 4705 generic.go:334] "Generic (PLEG): container finished" podID="98bca645-7f96-4667-adb9-cf4c5002ba78" containerID="a39d9b6ccfe88ad9e7294574b4ac279e3a3d9de4fb645305b04c16257ab0726a" exitCode=0
Feb 16 16:01:05 crc kubenswrapper[4705]: I0216 16:01:05.577185 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-75mxg" event={"ID":"98bca645-7f96-4667-adb9-cf4c5002ba78","Type":"ContainerDied","Data":"a39d9b6ccfe88ad9e7294574b4ac279e3a3d9de4fb645305b04c16257ab0726a"}
Feb 16 16:01:06 crc kubenswrapper[4705]: E0216 16:01:06.564760 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 16:01:06 crc kubenswrapper[4705]: E0216 16:01:06.565072 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired.
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:01:06 crc kubenswrapper[4705]: E0216 16:01:06.565225 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:01:06 crc kubenswrapper[4705]: E0216 16:01:06.566434 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.043539 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520961-75mxg" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.223971 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") pod \"98bca645-7f96-4667-adb9-cf4c5002ba78\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.224097 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") pod \"98bca645-7f96-4667-adb9-cf4c5002ba78\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.224121 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") pod \"98bca645-7f96-4667-adb9-cf4c5002ba78\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.224241 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") pod \"98bca645-7f96-4667-adb9-cf4c5002ba78\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.230871 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "98bca645-7f96-4667-adb9-cf4c5002ba78" (UID: "98bca645-7f96-4667-adb9-cf4c5002ba78"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.236529 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm" (OuterVolumeSpecName: "kube-api-access-bs8dm") pod "98bca645-7f96-4667-adb9-cf4c5002ba78" (UID: "98bca645-7f96-4667-adb9-cf4c5002ba78"). InnerVolumeSpecName "kube-api-access-bs8dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.267937 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98bca645-7f96-4667-adb9-cf4c5002ba78" (UID: "98bca645-7f96-4667-adb9-cf4c5002ba78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.319506 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data" (OuterVolumeSpecName: "config-data") pod "98bca645-7f96-4667-adb9-cf4c5002ba78" (UID: "98bca645-7f96-4667-adb9-cf4c5002ba78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.330037 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.330100 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.330118 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.330135 4705 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.606294 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-75mxg" event={"ID":"98bca645-7f96-4667-adb9-cf4c5002ba78","Type":"ContainerDied","Data":"f901503dffd6c6aa6435c4b73cc4fb63e002513cbb057cd43d9905bbebca9811"} Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.606341 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f901503dffd6c6aa6435c4b73cc4fb63e002513cbb057cd43d9905bbebca9811" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.606490 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520961-75mxg" Feb 16 16:01:09 crc kubenswrapper[4705]: E0216 16:01:09.548556 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:01:09 crc kubenswrapper[4705]: E0216 16:01:09.549208 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:01:09 crc kubenswrapper[4705]: E0216 16:01:09.549442 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:01:09 crc kubenswrapper[4705]: E0216 16:01:09.550931 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:01:18 crc kubenswrapper[4705]: E0216 16:01:18.422345 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:01:22 crc kubenswrapper[4705]: E0216 16:01:22.432316 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.749798 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:01:23 crc kubenswrapper[4705]: E0216 16:01:23.750792 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bca645-7f96-4667-adb9-cf4c5002ba78" containerName="keystone-cron" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.750811 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bca645-7f96-4667-adb9-cf4c5002ba78" containerName="keystone-cron" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.751118 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bca645-7f96-4667-adb9-cf4c5002ba78" containerName="keystone-cron" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.753460 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.764847 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.764923 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.764932 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.764994 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867452 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867533 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867581 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867973 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867988 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.890306 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:24 crc kubenswrapper[4705]: I0216 16:01:24.107058 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:24 crc kubenswrapper[4705]: I0216 16:01:24.623896 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:01:24 crc kubenswrapper[4705]: I0216 16:01:24.814916 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerStarted","Data":"6764853331df6a6460f33d1474eb9cab471934aabdc993a1e48b65054f9958a8"} Feb 16 16:01:25 crc kubenswrapper[4705]: I0216 16:01:25.828438 4705 generic.go:334] "Generic (PLEG): container finished" podID="45a762e5-ea54-48f8-855c-71726ce18208" containerID="3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0" exitCode=0 Feb 16 16:01:25 crc kubenswrapper[4705]: I0216 16:01:25.828641 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerDied","Data":"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0"} Feb 16 16:01:27 crc kubenswrapper[4705]: I0216 16:01:27.857574 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerStarted","Data":"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98"} Feb 16 16:01:31 crc kubenswrapper[4705]: E0216 16:01:31.423851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:01:32 crc kubenswrapper[4705]: I0216 16:01:32.920758 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="45a762e5-ea54-48f8-855c-71726ce18208" containerID="1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98" exitCode=0 Feb 16 16:01:32 crc kubenswrapper[4705]: I0216 16:01:32.920805 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerDied","Data":"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98"} Feb 16 16:01:33 crc kubenswrapper[4705]: I0216 16:01:33.935134 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerStarted","Data":"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6"} Feb 16 16:01:33 crc kubenswrapper[4705]: I0216 16:01:33.968856 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5fgwc" podStartSLOduration=3.461783887 podStartE2EDuration="10.968833334s" podCreationTimestamp="2026-02-16 16:01:23 +0000 UTC" firstStartedPulling="2026-02-16 16:01:25.830497191 +0000 UTC m=+4080.015474267" lastFinishedPulling="2026-02-16 16:01:33.337546638 +0000 UTC m=+4087.522523714" observedRunningTime="2026-02-16 16:01:33.966529498 +0000 UTC m=+4088.151506614" watchObservedRunningTime="2026-02-16 16:01:33.968833334 +0000 UTC m=+4088.153810400" Feb 16 16:01:34 crc kubenswrapper[4705]: I0216 16:01:34.107716 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:34 crc kubenswrapper[4705]: I0216 16:01:34.107766 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:34 crc kubenswrapper[4705]: E0216 16:01:34.423938 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:01:35 crc kubenswrapper[4705]: I0216 16:01:35.184519 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5fgwc" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" probeResult="failure" output=< Feb 16 16:01:35 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:01:35 crc kubenswrapper[4705]: > Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.926196 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.929315 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.946279 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.973830 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.973899 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:40 crc 
kubenswrapper[4705]: I0216 16:01:40.974116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.077407 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.077701 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.077753 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.077906 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 
16:01:41.078076 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.130225 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.257570 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.847743 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:01:42 crc kubenswrapper[4705]: I0216 16:01:42.066650 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerStarted","Data":"d3cbf362788ecca0b02f3b4fcb46b0e4f0ad609ca73ee1c2df2ee5804e7a4670"} Feb 16 16:01:42 crc kubenswrapper[4705]: E0216 16:01:42.422319 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:01:43 crc kubenswrapper[4705]: I0216 16:01:43.081252 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerID="4d9d713141a3f7aae0f29ba8e808800a207d2293c8b10b72e5b38efe8b4e1b72" exitCode=0 Feb 16 16:01:43 crc kubenswrapper[4705]: I0216 16:01:43.081350 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerDied","Data":"4d9d713141a3f7aae0f29ba8e808800a207d2293c8b10b72e5b38efe8b4e1b72"} Feb 16 16:01:44 crc kubenswrapper[4705]: I0216 16:01:44.109430 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerStarted","Data":"0c5e900cecec2198ca2b7f8dc95e8434953c226ab2da5841e59c797336ef7673"} Feb 16 16:01:45 crc kubenswrapper[4705]: I0216 16:01:45.169932 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5fgwc" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" probeResult="failure" output=< Feb 16 16:01:45 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:01:45 crc kubenswrapper[4705]: > Feb 16 16:01:45 crc kubenswrapper[4705]: E0216 16:01:45.422332 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:01:47 crc kubenswrapper[4705]: I0216 16:01:47.141116 4705 generic.go:334] "Generic (PLEG): container finished" podID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerID="0c5e900cecec2198ca2b7f8dc95e8434953c226ab2da5841e59c797336ef7673" exitCode=0 Feb 16 16:01:47 crc kubenswrapper[4705]: I0216 16:01:47.141179 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerDied","Data":"0c5e900cecec2198ca2b7f8dc95e8434953c226ab2da5841e59c797336ef7673"} Feb 16 16:01:48 crc kubenswrapper[4705]: I0216 16:01:48.154728 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerStarted","Data":"1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081"} Feb 16 16:01:48 crc kubenswrapper[4705]: I0216 16:01:48.186347 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6pbhc" podStartSLOduration=3.714002632 podStartE2EDuration="8.186323325s" podCreationTimestamp="2026-02-16 16:01:40 +0000 UTC" firstStartedPulling="2026-02-16 16:01:43.084565269 +0000 UTC m=+4097.269542345" lastFinishedPulling="2026-02-16 16:01:47.556885962 +0000 UTC m=+4101.741863038" observedRunningTime="2026-02-16 16:01:48.176716333 +0000 UTC m=+4102.361693409" watchObservedRunningTime="2026-02-16 16:01:48.186323325 +0000 UTC m=+4102.371300411" Feb 16 16:01:51 crc kubenswrapper[4705]: I0216 16:01:51.258425 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:51 crc kubenswrapper[4705]: I0216 16:01:51.259041 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:51 crc kubenswrapper[4705]: I0216 16:01:51.872946 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:55 crc kubenswrapper[4705]: I0216 16:01:55.163375 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5fgwc" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" 
probeResult="failure" output=< Feb 16 16:01:55 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:01:55 crc kubenswrapper[4705]: > Feb 16 16:01:56 crc kubenswrapper[4705]: E0216 16:01:56.435119 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:00 crc kubenswrapper[4705]: E0216 16:02:00.424445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:02:01 crc kubenswrapper[4705]: I0216 16:02:01.308561 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:02:03 crc kubenswrapper[4705]: I0216 16:02:03.966154 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:02:03 crc kubenswrapper[4705]: I0216 16:02:03.966838 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6pbhc" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="registry-server" containerID="cri-o://1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081" gracePeriod=2 Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.195278 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.257271 4705 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.554546 4705 generic.go:334] "Generic (PLEG): container finished" podID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerID="1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081" exitCode=0 Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.554642 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerDied","Data":"1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081"} Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.554943 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerDied","Data":"d3cbf362788ecca0b02f3b4fcb46b0e4f0ad609ca73ee1c2df2ee5804e7a4670"} Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.554978 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3cbf362788ecca0b02f3b4fcb46b0e4f0ad609ca73ee1c2df2ee5804e7a4670" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.611713 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.742077 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") pod \"170bdaa1-dc08-4282-955b-debf707fd9f1\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.742204 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") pod \"170bdaa1-dc08-4282-955b-debf707fd9f1\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.742298 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") pod \"170bdaa1-dc08-4282-955b-debf707fd9f1\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.742881 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities" (OuterVolumeSpecName: "utilities") pod "170bdaa1-dc08-4282-955b-debf707fd9f1" (UID: "170bdaa1-dc08-4282-955b-debf707fd9f1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.744495 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.748180 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs" (OuterVolumeSpecName: "kube-api-access-ffrhs") pod "170bdaa1-dc08-4282-955b-debf707fd9f1" (UID: "170bdaa1-dc08-4282-955b-debf707fd9f1"). InnerVolumeSpecName "kube-api-access-ffrhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.795160 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "170bdaa1-dc08-4282-955b-debf707fd9f1" (UID: "170bdaa1-dc08-4282-955b-debf707fd9f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.846334 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.846658 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:05 crc kubenswrapper[4705]: I0216 16:02:05.564515 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:02:05 crc kubenswrapper[4705]: I0216 16:02:05.603017 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:02:05 crc kubenswrapper[4705]: I0216 16:02:05.613243 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:02:06 crc kubenswrapper[4705]: I0216 16:02:06.437938 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" path="/var/lib/kubelet/pods/170bdaa1-dc08-4282-955b-debf707fd9f1/volumes" Feb 16 16:02:06 crc kubenswrapper[4705]: I0216 16:02:06.556533 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:02:06 crc kubenswrapper[4705]: I0216 16:02:06.557024 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5fgwc" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" containerID="cri-o://787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" gracePeriod=2 Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.097648 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.212713 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") pod \"45a762e5-ea54-48f8-855c-71726ce18208\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.212834 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") pod \"45a762e5-ea54-48f8-855c-71726ce18208\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.212970 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") pod \"45a762e5-ea54-48f8-855c-71726ce18208\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.214654 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities" (OuterVolumeSpecName: "utilities") pod "45a762e5-ea54-48f8-855c-71726ce18208" (UID: "45a762e5-ea54-48f8-855c-71726ce18208"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.230124 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb" (OuterVolumeSpecName: "kube-api-access-rrtgb") pod "45a762e5-ea54-48f8-855c-71726ce18208" (UID: "45a762e5-ea54-48f8-855c-71726ce18208"). InnerVolumeSpecName "kube-api-access-rrtgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.318081 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.318135 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.372184 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45a762e5-ea54-48f8-855c-71726ce18208" (UID: "45a762e5-ea54-48f8-855c-71726ce18208"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.420300 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.590416 4705 generic.go:334] "Generic (PLEG): container finished" podID="45a762e5-ea54-48f8-855c-71726ce18208" containerID="787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" exitCode=0 Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.590478 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.590498 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerDied","Data":"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6"} Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.591100 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerDied","Data":"6764853331df6a6460f33d1474eb9cab471934aabdc993a1e48b65054f9958a8"} Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.591181 4705 scope.go:117] "RemoveContainer" containerID="787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.621384 4705 scope.go:117] "RemoveContainer" containerID="1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.625483 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.638148 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.648420 4705 scope.go:117] "RemoveContainer" containerID="3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.704291 4705 scope.go:117] "RemoveContainer" containerID="787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" Feb 16 16:02:07 crc kubenswrapper[4705]: E0216 16:02:07.704920 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6\": container with ID starting with 787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6 not found: ID does not exist" containerID="787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.704951 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6"} err="failed to get container status \"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6\": rpc error: code = NotFound desc = could not find container \"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6\": container with ID starting with 787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6 not found: ID does not exist" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.704974 4705 scope.go:117] "RemoveContainer" containerID="1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98" Feb 16 16:02:07 crc kubenswrapper[4705]: E0216 16:02:07.705325 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98\": container with ID starting with 1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98 not found: ID does not exist" containerID="1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.705354 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98"} err="failed to get container status \"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98\": rpc error: code = NotFound desc = could not find container \"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98\": container with ID 
starting with 1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98 not found: ID does not exist" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.705415 4705 scope.go:117] "RemoveContainer" containerID="3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0" Feb 16 16:02:07 crc kubenswrapper[4705]: E0216 16:02:07.705703 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0\": container with ID starting with 3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0 not found: ID does not exist" containerID="3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.705736 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0"} err="failed to get container status \"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0\": rpc error: code = NotFound desc = could not find container \"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0\": container with ID starting with 3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0 not found: ID does not exist" Feb 16 16:02:08 crc kubenswrapper[4705]: I0216 16:02:08.432411 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a762e5-ea54-48f8-855c-71726ce18208" path="/var/lib/kubelet/pods/45a762e5-ea54-48f8-855c-71726ce18208/volumes" Feb 16 16:02:11 crc kubenswrapper[4705]: E0216 16:02:11.423400 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:14 crc kubenswrapper[4705]: E0216 16:02:14.421710 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.431020 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.431726 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.558898 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559666 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="extract-utilities" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559689 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="extract-utilities" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559709 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a762e5-ea54-48f8-855c-71726ce18208" 
containerName="extract-utilities" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559719 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="extract-utilities" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559734 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559745 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559787 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="extract-content" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559796 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="extract-content" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559832 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="extract-content" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559841 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="extract-content" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559884 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559895 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.560200 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a762e5-ea54-48f8-855c-71726ce18208" 
containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.560225 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.562649 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.569286 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.751867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjrcn\" (UniqueName: \"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.752458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.752670 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.855349 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.855918 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.856065 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjrcn\" (UniqueName: \"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.856569 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.856827 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.877453 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjrcn\" (UniqueName: 
\"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.906020 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:27 crc kubenswrapper[4705]: I0216 16:02:27.421590 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:27 crc kubenswrapper[4705]: I0216 16:02:27.881574 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerStarted","Data":"f394747612343fb22c3e0f1891ddb10d8664c98075ae50493bdf58ba26dfbcb6"} Feb 16 16:02:28 crc kubenswrapper[4705]: I0216 16:02:28.914193 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerID="3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b" exitCode=0 Feb 16 16:02:28 crc kubenswrapper[4705]: I0216 16:02:28.914692 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerDied","Data":"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b"} Feb 16 16:02:30 crc kubenswrapper[4705]: I0216 16:02:30.944225 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerStarted","Data":"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd"} Feb 16 16:02:32 crc kubenswrapper[4705]: I0216 16:02:32.976664 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" 
containerID="b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd" exitCode=0 Feb 16 16:02:32 crc kubenswrapper[4705]: I0216 16:02:32.976748 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerDied","Data":"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd"} Feb 16 16:02:33 crc kubenswrapper[4705]: I0216 16:02:33.994063 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerStarted","Data":"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd"} Feb 16 16:02:34 crc kubenswrapper[4705]: I0216 16:02:34.026200 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ngq5z" podStartSLOduration=3.502986609 podStartE2EDuration="8.02617476s" podCreationTimestamp="2026-02-16 16:02:26 +0000 UTC" firstStartedPulling="2026-02-16 16:02:28.917842159 +0000 UTC m=+4143.102819235" lastFinishedPulling="2026-02-16 16:02:33.44103031 +0000 UTC m=+4147.626007386" observedRunningTime="2026-02-16 16:02:34.020805849 +0000 UTC m=+4148.205782915" watchObservedRunningTime="2026-02-16 16:02:34.02617476 +0000 UTC m=+4148.211151836" Feb 16 16:02:36 crc kubenswrapper[4705]: I0216 16:02:36.907419 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:36 crc kubenswrapper[4705]: I0216 16:02:36.908005 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:36 crc kubenswrapper[4705]: I0216 16:02:36.959474 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:41 crc kubenswrapper[4705]: E0216 16:02:41.423007 4705 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:41 crc kubenswrapper[4705]: E0216 16:02:41.423060 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:02:46 crc kubenswrapper[4705]: I0216 16:02:46.963457 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.022802 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.118710 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ngq5z" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="registry-server" containerID="cri-o://d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" gracePeriod=2 Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.683096 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.761833 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") pod \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.767634 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjrcn\" (UniqueName: \"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") pod \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.767829 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") pod \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.769112 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities" (OuterVolumeSpecName: "utilities") pod "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" (UID: "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.773541 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn" (OuterVolumeSpecName: "kube-api-access-sjrcn") pod "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" (UID: "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3"). InnerVolumeSpecName "kube-api-access-sjrcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.791812 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" (UID: "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.871939 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.871976 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjrcn\" (UniqueName: \"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.871990 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.132426 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerID="d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" exitCode=0 Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.132498 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.132499 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerDied","Data":"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd"} Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.132982 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerDied","Data":"f394747612343fb22c3e0f1891ddb10d8664c98075ae50493bdf58ba26dfbcb6"} Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.133032 4705 scope.go:117] "RemoveContainer" containerID="d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.185603 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.187877 4705 scope.go:117] "RemoveContainer" containerID="b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.200072 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.216943 4705 scope.go:117] "RemoveContainer" containerID="3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.446512 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" path="/var/lib/kubelet/pods/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3/volumes" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.944584 4705 scope.go:117] "RemoveContainer" 
containerID="d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" Feb 16 16:02:48 crc kubenswrapper[4705]: E0216 16:02:48.945853 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd\": container with ID starting with d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd not found: ID does not exist" containerID="d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.946068 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd"} err="failed to get container status \"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd\": rpc error: code = NotFound desc = could not find container \"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd\": container with ID starting with d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd not found: ID does not exist" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.946213 4705 scope.go:117] "RemoveContainer" containerID="b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd" Feb 16 16:02:48 crc kubenswrapper[4705]: E0216 16:02:48.946866 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd\": container with ID starting with b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd not found: ID does not exist" containerID="b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.946905 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd"} err="failed to get container status \"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd\": rpc error: code = NotFound desc = could not find container \"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd\": container with ID starting with b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd not found: ID does not exist" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.946923 4705 scope.go:117] "RemoveContainer" containerID="3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b" Feb 16 16:02:48 crc kubenswrapper[4705]: E0216 16:02:48.947265 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b\": container with ID starting with 3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b not found: ID does not exist" containerID="3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.947416 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b"} err="failed to get container status \"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b\": rpc error: code = NotFound desc = could not find container \"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b\": container with ID starting with 3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b not found: ID does not exist" Feb 16 16:02:56 crc kubenswrapper[4705]: E0216 16:02:56.428209 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:56 crc kubenswrapper[4705]: E0216 16:02:56.428225 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:01 crc kubenswrapper[4705]: I0216 16:03:01.684128 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:03:01 crc kubenswrapper[4705]: I0216 16:03:01.684596 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:03:09 crc kubenswrapper[4705]: E0216 16:03:09.422493 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:03:11 crc kubenswrapper[4705]: E0216 16:03:11.421625 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:22 crc kubenswrapper[4705]: E0216 16:03:22.421776 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:23 crc kubenswrapper[4705]: E0216 16:03:23.422203 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:03:31 crc kubenswrapper[4705]: I0216 16:03:31.686949 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:03:31 crc kubenswrapper[4705]: I0216 16:03:31.687466 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:03:35 crc kubenswrapper[4705]: E0216 16:03:35.424435 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:03:37 crc kubenswrapper[4705]: E0216 16:03:37.421845 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:46 crc kubenswrapper[4705]: E0216 16:03:46.429807 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:03:48 crc kubenswrapper[4705]: E0216 16:03:48.422013 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:59 crc kubenswrapper[4705]: E0216 16:03:59.422675 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:04:01 crc kubenswrapper[4705]: E0216 16:04:01.422126 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.684010 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.684076 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.684124 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.685136 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.685214 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9" gracePeriod=600 Feb 16 16:04:01 crc 
kubenswrapper[4705]: I0216 16:04:01.957749 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9" exitCode=0 Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.957796 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9"} Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.958143 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 16:04:02 crc kubenswrapper[4705]: I0216 16:04:02.971146 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044"} Feb 16 16:04:11 crc kubenswrapper[4705]: E0216 16:04:11.422146 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:04:13 crc kubenswrapper[4705]: E0216 16:04:13.422475 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:24 crc kubenswrapper[4705]: E0216 16:04:24.421640 4705 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:26 crc kubenswrapper[4705]: E0216 16:04:26.438721 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:04:35 crc kubenswrapper[4705]: I0216 16:04:35.815857 4705 scope.go:117] "RemoveContainer" containerID="433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504" Feb 16 16:04:35 crc kubenswrapper[4705]: I0216 16:04:35.850723 4705 scope.go:117] "RemoveContainer" containerID="fcdb2d6e6be0d768bddbedb97937147e4b45a055a895a05093067235aae58d56" Feb 16 16:04:35 crc kubenswrapper[4705]: I0216 16:04:35.875298 4705 scope.go:117] "RemoveContainer" containerID="42ce4e0addaeffaf331f978bfecd58e49daffbcd26474b8a5e6259c4e372d5da" Feb 16 16:04:39 crc kubenswrapper[4705]: E0216 16:04:39.421930 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:04:39 crc kubenswrapper[4705]: E0216 16:04:39.422094 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:51 crc kubenswrapper[4705]: E0216 16:04:51.422039 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:53 crc kubenswrapper[4705]: E0216 16:04:53.421772 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:02 crc kubenswrapper[4705]: E0216 16:05:02.421226 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:06 crc kubenswrapper[4705]: E0216 16:05:06.435807 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:15 crc kubenswrapper[4705]: E0216 16:05:15.428285 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:19 crc kubenswrapper[4705]: E0216 16:05:19.423642 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:26 crc kubenswrapper[4705]: E0216 16:05:26.431582 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:33 crc kubenswrapper[4705]: E0216 16:05:33.422852 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:39 crc kubenswrapper[4705]: E0216 16:05:39.421572 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:46 crc kubenswrapper[4705]: E0216 16:05:46.431208 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:52 crc kubenswrapper[4705]: E0216 16:05:52.421888 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:59 crc kubenswrapper[4705]: E0216 16:05:59.423029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:07 crc kubenswrapper[4705]: I0216 16:06:07.423335 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:06:07 crc kubenswrapper[4705]: E0216 16:06:07.555628 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:06:07 crc kubenswrapper[4705]: E0216 16:06:07.555699 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:06:07 crc kubenswrapper[4705]: E0216 16:06:07.555898 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 16:06:07 crc kubenswrapper[4705]: E0216 16:06:07.557139 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:06:12 crc kubenswrapper[4705]: E0216 16:06:12.515773 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:06:12 crc kubenswrapper[4705]: E0216 16:06:12.516279 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:06:12 crc kubenswrapper[4705]: E0216 16:06:12.516457 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:06:12 crc kubenswrapper[4705]: E0216 16:06:12.517685 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:21 crc kubenswrapper[4705]: E0216 16:06:21.423228 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:06:24 crc kubenswrapper[4705]: E0216 16:06:24.422681 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:31 crc kubenswrapper[4705]: I0216 16:06:31.684423 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:06:31 crc kubenswrapper[4705]: I0216 16:06:31.685070 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:06:35 crc kubenswrapper[4705]: E0216 16:06:35.425158 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:06:36 crc kubenswrapper[4705]: E0216 16:06:36.429963 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:47 crc kubenswrapper[4705]: E0216 16:06:47.425022 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:06:47 crc kubenswrapper[4705]: E0216 16:06:47.425049 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:58 crc kubenswrapper[4705]: E0216 16:06:58.424961 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:00 crc kubenswrapper[4705]: E0216 16:07:00.914239 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:01 crc kubenswrapper[4705]: I0216 16:07:01.684976 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:07:01 crc kubenswrapper[4705]: I0216 16:07:01.685397 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:07:09 crc kubenswrapper[4705]: E0216 16:07:09.420243 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:11 crc kubenswrapper[4705]: I0216 16:07:11.165497 4705 generic.go:334] "Generic (PLEG): container finished" podID="896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" containerID="7945d6ad7374ab3b23b668ea795bd7af5c36b315c187c0f9f1d7dca19352746b" exitCode=2 Feb 16 16:07:11 crc kubenswrapper[4705]: I0216 16:07:11.165546 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" event={"ID":"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c","Type":"ContainerDied","Data":"7945d6ad7374ab3b23b668ea795bd7af5c36b315c187c0f9f1d7dca19352746b"} Feb 16 16:07:12 crc kubenswrapper[4705]: I0216 16:07:12.640232 4705 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" Feb 16 16:07:12 crc kubenswrapper[4705]: I0216 16:07:12.775906 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") pod \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " Feb 16 16:07:12 crc kubenswrapper[4705]: I0216 16:07:12.777141 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") pod \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " Feb 16 16:07:12 crc kubenswrapper[4705]: I0216 16:07:12.779656 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") pod \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.186679 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" event={"ID":"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c","Type":"ContainerDied","Data":"f200efbd485249ddfdf83b40b40f349bd03520224bed729f92b3d095ed0ae82e"} Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.186738 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f200efbd485249ddfdf83b40b40f349bd03520224bed729f92b3d095ed0ae82e" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.186814 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.469662 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8" (OuterVolumeSpecName: "kube-api-access-687w8") pod "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" (UID: "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c"). InnerVolumeSpecName "kube-api-access-687w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.503358 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") on node \"crc\" DevicePath \"\"" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.632578 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" (UID: "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.632972 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory" (OuterVolumeSpecName: "inventory") pod "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" (UID: "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.709299 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.709341 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 16:07:13 crc kubenswrapper[4705]: E0216 16:07:13.950322 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod896e8ac5_e84c_41d6_a6e5_638c9b5cae1c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod896e8ac5_e84c_41d6_a6e5_638c9b5cae1c.slice/crio-f200efbd485249ddfdf83b40b40f349bd03520224bed729f92b3d095ed0ae82e\": RecentStats: unable to find data in memory cache]" Feb 16 16:07:14 crc kubenswrapper[4705]: E0216 16:07:14.423353 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:24 crc kubenswrapper[4705]: E0216 16:07:24.422991 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:28 crc 
kubenswrapper[4705]: E0216 16:07:28.425365 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.684655 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.685496 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.685589 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.686972 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.687048 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" gracePeriod=600 Feb 16 16:07:31 crc kubenswrapper[4705]: E0216 16:07:31.838534 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:07:32 crc kubenswrapper[4705]: I0216 16:07:32.419861 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" exitCode=0 Feb 16 16:07:32 crc kubenswrapper[4705]: I0216 16:07:32.441646 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044"} Feb 16 16:07:32 crc kubenswrapper[4705]: I0216 16:07:32.442519 4705 scope.go:117] "RemoveContainer" containerID="314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9" Feb 16 16:07:32 crc kubenswrapper[4705]: I0216 16:07:32.443241 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:07:32 crc kubenswrapper[4705]: E0216 16:07:32.443707 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:07:37 crc kubenswrapper[4705]: E0216 16:07:37.423498 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:42 crc kubenswrapper[4705]: E0216 16:07:42.423905 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:45 crc kubenswrapper[4705]: I0216 16:07:45.420654 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:07:45 crc kubenswrapper[4705]: E0216 16:07:45.421855 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:07:52 crc kubenswrapper[4705]: E0216 16:07:52.422282 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:54 crc kubenswrapper[4705]: E0216 16:07:54.422875 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:58 crc kubenswrapper[4705]: I0216 16:07:58.420224 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:07:58 crc kubenswrapper[4705]: E0216 16:07:58.420987 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.501828 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9zjsp"] Feb 16 16:08:00 crc kubenswrapper[4705]: E0216 16:08:00.502785 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.502806 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 16:08:00 crc kubenswrapper[4705]: E0216 16:08:00.502819 4705 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="extract-content" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.502826 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="extract-content" Feb 16 16:08:00 crc kubenswrapper[4705]: E0216 16:08:00.502889 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="extract-utilities" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.502901 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="extract-utilities" Feb 16 16:08:00 crc kubenswrapper[4705]: E0216 16:08:00.502924 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="registry-server" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.502932 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="registry-server" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.503320 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.503359 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="registry-server" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.505555 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.519388 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9zjsp"] Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.609300 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4d7p\" (UniqueName: \"kubernetes.io/projected/ffc91527-f266-408e-9dad-4ded626632f6-kube-api-access-t4d7p\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.610006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-catalog-content\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.610141 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-utilities\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.713899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-catalog-content\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.714028 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-utilities\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.714527 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-catalog-content\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.714617 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-utilities\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.714986 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4d7p\" (UniqueName: \"kubernetes.io/projected/ffc91527-f266-408e-9dad-4ded626632f6-kube-api-access-t4d7p\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.741120 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4d7p\" (UniqueName: \"kubernetes.io/projected/ffc91527-f266-408e-9dad-4ded626632f6-kube-api-access-t4d7p\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.885101 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:01 crc kubenswrapper[4705]: I0216 16:08:01.433025 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9zjsp"] Feb 16 16:08:01 crc kubenswrapper[4705]: I0216 16:08:01.784204 4705 generic.go:334] "Generic (PLEG): container finished" podID="ffc91527-f266-408e-9dad-4ded626632f6" containerID="ae7cf3cd2f47a26ad351f8c456f7e740fd52e36d4a7570bfefa2c8028acc7e73" exitCode=0 Feb 16 16:08:01 crc kubenswrapper[4705]: I0216 16:08:01.784286 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zjsp" event={"ID":"ffc91527-f266-408e-9dad-4ded626632f6","Type":"ContainerDied","Data":"ae7cf3cd2f47a26ad351f8c456f7e740fd52e36d4a7570bfefa2c8028acc7e73"} Feb 16 16:08:01 crc kubenswrapper[4705]: I0216 16:08:01.784403 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zjsp" event={"ID":"ffc91527-f266-408e-9dad-4ded626632f6","Type":"ContainerStarted","Data":"0a4259fbc128ee2d7bf7c2e29feea589ef20f27af6c8c3dae6c3f0c0796fcf6b"} Feb 16 16:08:06 crc kubenswrapper[4705]: I0216 16:08:06.855150 4705 generic.go:334] "Generic (PLEG): container finished" podID="ffc91527-f266-408e-9dad-4ded626632f6" containerID="634d5466f4c08d5c0f3e8701b771a7f27de757b9de7c5e15a184498af2f83b05" exitCode=0 Feb 16 16:08:06 crc kubenswrapper[4705]: I0216 16:08:06.855262 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zjsp" event={"ID":"ffc91527-f266-408e-9dad-4ded626632f6","Type":"ContainerDied","Data":"634d5466f4c08d5c0f3e8701b771a7f27de757b9de7c5e15a184498af2f83b05"} Feb 16 16:08:07 crc kubenswrapper[4705]: E0216 16:08:07.428255 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:07 crc kubenswrapper[4705]: E0216 16:08:07.428408 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:08:07 crc kubenswrapper[4705]: I0216 16:08:07.871468 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zjsp" event={"ID":"ffc91527-f266-408e-9dad-4ded626632f6","Type":"ContainerStarted","Data":"a91119866673d2c98754bedfce7058d15c91ded7ca173c332b245ae41c080a8b"} Feb 16 16:08:07 crc kubenswrapper[4705]: I0216 16:08:07.902748 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9zjsp" podStartSLOduration=2.453860107 podStartE2EDuration="7.902715639s" podCreationTimestamp="2026-02-16 16:08:00 +0000 UTC" firstStartedPulling="2026-02-16 16:08:01.786585828 +0000 UTC m=+4475.971562904" lastFinishedPulling="2026-02-16 16:08:07.23544136 +0000 UTC m=+4481.420418436" observedRunningTime="2026-02-16 16:08:07.902027539 +0000 UTC m=+4482.087004635" watchObservedRunningTime="2026-02-16 16:08:07.902715639 +0000 UTC m=+4482.087692725" Feb 16 16:08:09 crc kubenswrapper[4705]: I0216 16:08:09.419475 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:09 crc kubenswrapper[4705]: E0216 16:08:09.420087 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:10 crc kubenswrapper[4705]: I0216 16:08:10.886219 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:10 crc kubenswrapper[4705]: I0216 16:08:10.888528 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:10 crc kubenswrapper[4705]: I0216 16:08:10.956670 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:12 crc kubenswrapper[4705]: I0216 16:08:12.882580 4705 trace.go:236] Trace[1790360609]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (16-Feb-2026 16:08:11.698) (total time: 1179ms): Feb 16 16:08:12 crc kubenswrapper[4705]: Trace[1790360609]: [1.179522529s] [1.179522529s] END Feb 16 16:08:20 crc kubenswrapper[4705]: I0216 16:08:20.421506 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:20 crc kubenswrapper[4705]: E0216 16:08:20.423147 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:20 crc kubenswrapper[4705]: E0216 16:08:20.423320 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.006846 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.194907 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9zjsp"] Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.260566 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.260922 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j2v29" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="registry-server" containerID="cri-o://0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" gracePeriod=2 Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.860484 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j2v29" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:21.999426 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") pod \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:21.999481 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") pod \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:21.999770 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") pod \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.002581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities" (OuterVolumeSpecName: "utilities") pod "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" (UID: "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.021461 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm" (OuterVolumeSpecName: "kube-api-access-t8jvm") pod "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" (UID: "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634"). InnerVolumeSpecName "kube-api-access-t8jvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068298 4705 generic.go:334] "Generic (PLEG): container finished" podID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerID="0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" exitCode=0 Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068362 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerDied","Data":"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8"} Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068413 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerDied","Data":"abefdacd3131f9637e18b5d6a682929bf8b75c5123f9e2a087bae18c0b3b4aa0"} Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068435 4705 scope.go:117] "RemoveContainer" containerID="0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068654 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2v29" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.084040 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" (UID: "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.098802 4705 scope.go:117] "RemoveContainer" containerID="07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.104127 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.104305 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.104364 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") on node \"crc\" DevicePath \"\"" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.139054 4705 scope.go:117] "RemoveContainer" containerID="08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.200955 4705 scope.go:117] "RemoveContainer" containerID="0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" Feb 16 16:08:22 crc kubenswrapper[4705]: E0216 16:08:22.201564 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8\": container with ID starting with 0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8 not found: ID does not exist" containerID="0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.201637 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8"} err="failed to get container status \"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8\": rpc error: code = NotFound desc = could not find container \"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8\": container with ID starting with 0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8 not found: ID does not exist" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.201667 4705 scope.go:117] "RemoveContainer" containerID="07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703" Feb 16 16:08:22 crc kubenswrapper[4705]: E0216 16:08:22.202065 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703\": container with ID starting with 07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703 not found: ID does not exist" containerID="07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.202112 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703"} err="failed to get container status \"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703\": rpc error: code = NotFound desc = could not find container \"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703\": container with ID starting with 07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703 not found: ID does not exist" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.202138 4705 scope.go:117] "RemoveContainer" containerID="08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88" Feb 16 16:08:22 crc kubenswrapper[4705]: E0216 16:08:22.202438 4705 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88\": container with ID starting with 08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88 not found: ID does not exist" containerID="08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.202492 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88"} err="failed to get container status \"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88\": rpc error: code = NotFound desc = could not find container \"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88\": container with ID starting with 08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88 not found: ID does not exist" Feb 16 16:08:22 crc kubenswrapper[4705]: E0216 16:08:22.422010 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.444599 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.452902 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 16:08:24 crc kubenswrapper[4705]: I0216 16:08:24.435329 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" path="/var/lib/kubelet/pods/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634/volumes" Feb 16 16:08:33 crc kubenswrapper[4705]: E0216 16:08:33.423251 
4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:08:35 crc kubenswrapper[4705]: I0216 16:08:35.419737 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:35 crc kubenswrapper[4705]: E0216 16:08:35.420442 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:35 crc kubenswrapper[4705]: E0216 16:08:35.421886 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:36 crc kubenswrapper[4705]: I0216 16:08:36.049450 4705 scope.go:117] "RemoveContainer" containerID="1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081" Feb 16 16:08:36 crc kubenswrapper[4705]: I0216 16:08:36.114929 4705 scope.go:117] "RemoveContainer" containerID="4d9d713141a3f7aae0f29ba8e808800a207d2293c8b10b72e5b38efe8b4e1b72" Feb 16 16:08:36 crc kubenswrapper[4705]: I0216 16:08:36.141836 4705 scope.go:117] "RemoveContainer" containerID="0c5e900cecec2198ca2b7f8dc95e8434953c226ab2da5841e59c797336ef7673" Feb 16 16:08:45 crc kubenswrapper[4705]: E0216 
16:08:45.421156 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:08:46 crc kubenswrapper[4705]: I0216 16:08:46.428777 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:46 crc kubenswrapper[4705]: E0216 16:08:46.429145 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:47 crc kubenswrapper[4705]: E0216 16:08:47.423843 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:58 crc kubenswrapper[4705]: E0216 16:08:58.423789 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:59 crc kubenswrapper[4705]: I0216 16:08:59.419410 4705 scope.go:117] "RemoveContainer" 
containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:59 crc kubenswrapper[4705]: E0216 16:08:59.419940 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:00 crc kubenswrapper[4705]: E0216 16:09:00.422776 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:10 crc kubenswrapper[4705]: I0216 16:09:10.420576 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:10 crc kubenswrapper[4705]: E0216 16:09:10.421791 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:13 crc kubenswrapper[4705]: E0216 16:09:13.421523 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:09:14 crc kubenswrapper[4705]: E0216 16:09:14.420790 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:21 crc kubenswrapper[4705]: I0216 16:09:21.420189 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:21 crc kubenswrapper[4705]: E0216 16:09:21.421032 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:25 crc kubenswrapper[4705]: E0216 16:09:25.422605 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:09:29 crc kubenswrapper[4705]: E0216 16:09:29.422098 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:36 crc kubenswrapper[4705]: I0216 
16:09:36.434167 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:36 crc kubenswrapper[4705]: E0216 16:09:36.435555 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:40 crc kubenswrapper[4705]: E0216 16:09:40.423747 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:09:42 crc kubenswrapper[4705]: E0216 16:09:42.421683 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:48 crc kubenswrapper[4705]: I0216 16:09:48.420278 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:48 crc kubenswrapper[4705]: E0216 16:09:48.421329 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:51 crc kubenswrapper[4705]: E0216 16:09:51.423191 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:09:55 crc kubenswrapper[4705]: E0216 16:09:55.422105 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:59 crc kubenswrapper[4705]: I0216 16:09:59.420726 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:59 crc kubenswrapper[4705]: E0216 16:09:59.421248 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:06 crc kubenswrapper[4705]: E0216 16:10:06.429069 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" 
Feb 16 16:10:08 crc kubenswrapper[4705]: E0216 16:10:08.421995 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:10:12 crc kubenswrapper[4705]: I0216 16:10:12.420482 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:10:12 crc kubenswrapper[4705]: E0216 16:10:12.422809 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:17 crc kubenswrapper[4705]: E0216 16:10:17.423678 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:10:21 crc kubenswrapper[4705]: E0216 16:10:21.426594 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:10:26 crc kubenswrapper[4705]: I0216 16:10:26.441790 4705 scope.go:117] "RemoveContainer" 
containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:10:26 crc kubenswrapper[4705]: E0216 16:10:26.446957 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:29 crc kubenswrapper[4705]: E0216 16:10:29.423254 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:10:36 crc kubenswrapper[4705]: E0216 16:10:36.438402 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:10:40 crc kubenswrapper[4705]: I0216 16:10:40.419735 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:10:40 crc kubenswrapper[4705]: E0216 16:10:40.420397 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:40 crc kubenswrapper[4705]: E0216 16:10:40.421699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:10:50 crc kubenswrapper[4705]: E0216 16:10:50.422939 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:10:55 crc kubenswrapper[4705]: I0216 16:10:55.419520 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:10:55 crc kubenswrapper[4705]: E0216 16:10:55.420832 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:55 crc kubenswrapper[4705]: E0216 16:10:55.421830 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" 
Feb 16 16:11:03 crc kubenswrapper[4705]: E0216 16:11:03.430921 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:07 crc kubenswrapper[4705]: E0216 16:11:07.422343 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:11:10 crc kubenswrapper[4705]: I0216 16:11:10.419981 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:11:10 crc kubenswrapper[4705]: E0216 16:11:10.421139 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:11:14 crc kubenswrapper[4705]: I0216 16:11:14.423041 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:11:14 crc kubenswrapper[4705]: E0216 16:11:14.548762 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:11:14 crc kubenswrapper[4705]: E0216 16:11:14.548828 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:11:14 crc kubenswrapper[4705]: E0216 16:11:14.548983 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:11:14 crc kubenswrapper[4705]: E0216 16:11:14.550184 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:19 crc kubenswrapper[4705]: E0216 16:11:19.560861 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:11:19 crc kubenswrapper[4705]: E0216 16:11:19.561729 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:11:19 crc kubenswrapper[4705]: E0216 16:11:19.561878 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:11:19 crc kubenswrapper[4705]: E0216 16:11:19.563218 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:11:23 crc kubenswrapper[4705]: I0216 16:11:23.420249 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:11:23 crc kubenswrapper[4705]: E0216 16:11:23.421434 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:11:28 crc kubenswrapper[4705]: E0216 16:11:28.431099 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:31 crc kubenswrapper[4705]: E0216 16:11:31.424219 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:11:38 crc kubenswrapper[4705]: I0216 16:11:38.421043 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:11:38 crc kubenswrapper[4705]: E0216 16:11:38.422022 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.409535 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:11:39 crc kubenswrapper[4705]: E0216 16:11:39.410793 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="registry-server" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.410843 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="registry-server" Feb 16 16:11:39 crc kubenswrapper[4705]: E0216 16:11:39.410908 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="extract-utilities" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.410926 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="extract-utilities" Feb 16 16:11:39 crc kubenswrapper[4705]: E0216 16:11:39.410954 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="extract-content" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.410967 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="extract-content" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.411350 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="registry-server" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.413424 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.428540 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.564191 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.564674 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.564701 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.669498 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.669603 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.669640 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.670111 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.670181 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:40 crc kubenswrapper[4705]: I0216 16:11:40.358670 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:40 crc kubenswrapper[4705]: I0216 16:11:40.646911 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:41 crc kubenswrapper[4705]: I0216 16:11:41.250640 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:11:41 crc kubenswrapper[4705]: W0216 16:11:41.255406 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e1b6744_00c5_44f1_a5e6_0056eef02141.slice/crio-c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318 WatchSource:0}: Error finding container c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318: Status 404 returned error can't find the container with id c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318 Feb 16 16:11:41 crc kubenswrapper[4705]: I0216 16:11:41.393258 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerStarted","Data":"c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318"} Feb 16 16:11:42 crc kubenswrapper[4705]: I0216 16:11:42.409039 4705 generic.go:334] "Generic (PLEG): container finished" podID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerID="3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086" exitCode=0 Feb 16 16:11:42 crc kubenswrapper[4705]: I0216 16:11:42.409484 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerDied","Data":"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086"} Feb 16 16:11:43 crc kubenswrapper[4705]: E0216 16:11:43.422694 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:43 crc kubenswrapper[4705]: I0216 16:11:43.428316 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerStarted","Data":"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085"} Feb 16 16:11:45 crc kubenswrapper[4705]: E0216 16:11:45.423699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:11:48 crc kubenswrapper[4705]: I0216 16:11:48.492498 4705 generic.go:334] "Generic (PLEG): container finished" podID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerID="8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085" exitCode=0 Feb 16 16:11:48 crc kubenswrapper[4705]: I0216 16:11:48.492644 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerDied","Data":"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085"} Feb 16 16:11:49 crc kubenswrapper[4705]: I0216 16:11:49.506429 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerStarted","Data":"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb"} Feb 16 16:11:49 crc kubenswrapper[4705]: I0216 16:11:49.536581 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sch5q" podStartSLOduration=4.038745708 podStartE2EDuration="10.536563312s" podCreationTimestamp="2026-02-16 
16:11:39 +0000 UTC" firstStartedPulling="2026-02-16 16:11:42.413200479 +0000 UTC m=+4696.598177565" lastFinishedPulling="2026-02-16 16:11:48.911018093 +0000 UTC m=+4703.095995169" observedRunningTime="2026-02-16 16:11:49.533551697 +0000 UTC m=+4703.718528783" watchObservedRunningTime="2026-02-16 16:11:49.536563312 +0000 UTC m=+4703.721540378" Feb 16 16:11:50 crc kubenswrapper[4705]: I0216 16:11:50.420544 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:11:50 crc kubenswrapper[4705]: E0216 16:11:50.420849 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:11:50 crc kubenswrapper[4705]: I0216 16:11:50.647358 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:50 crc kubenswrapper[4705]: I0216 16:11:50.647482 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:51 crc kubenswrapper[4705]: I0216 16:11:51.768810 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sch5q" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" probeResult="failure" output=< Feb 16 16:11:51 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:11:51 crc kubenswrapper[4705]: > Feb 16 16:11:56 crc kubenswrapper[4705]: E0216 16:11:56.431028 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:59 crc kubenswrapper[4705]: E0216 16:11:59.423239 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:12:02 crc kubenswrapper[4705]: I0216 16:12:02.419464 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:12:02 crc kubenswrapper[4705]: E0216 16:12:02.420081 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:12:02 crc kubenswrapper[4705]: I0216 16:12:02.555137 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sch5q" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" probeResult="failure" output=< Feb 16 16:12:02 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:12:02 crc kubenswrapper[4705]: > Feb 16 16:12:11 crc kubenswrapper[4705]: E0216 16:12:11.421485 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:12:11 crc kubenswrapper[4705]: I0216 16:12:11.698237 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sch5q" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" probeResult="failure" output=< Feb 16 16:12:11 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:12:11 crc kubenswrapper[4705]: > Feb 16 16:12:14 crc kubenswrapper[4705]: E0216 16:12:14.422449 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.671069 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.674290 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.689760 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.719505 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.719566 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.719672 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822093 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822156 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822233 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822866 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822877 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.843632 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.005954 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.419696 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:12:17 crc kubenswrapper[4705]: E0216 16:12:17.420463 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.595655 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:17 crc kubenswrapper[4705]: W0216 16:12:17.599715 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dba665a_c068_4b3b_aab8_2f915e391d01.slice/crio-e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45 WatchSource:0}: Error finding container e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45: Status 404 returned error can't find the container with id e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45 Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.830107 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerStarted","Data":"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188"} Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.830164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" 
event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerStarted","Data":"e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45"} Feb 16 16:12:18 crc kubenswrapper[4705]: I0216 16:12:18.847928 4705 generic.go:334] "Generic (PLEG): container finished" podID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerID="9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188" exitCode=0 Feb 16 16:12:18 crc kubenswrapper[4705]: I0216 16:12:18.848074 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerDied","Data":"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188"} Feb 16 16:12:20 crc kubenswrapper[4705]: I0216 16:12:20.721713 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:12:20 crc kubenswrapper[4705]: I0216 16:12:20.778003 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:12:20 crc kubenswrapper[4705]: I0216 16:12:20.876656 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerStarted","Data":"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7"} Feb 16 16:12:21 crc kubenswrapper[4705]: I0216 16:12:21.889408 4705 generic.go:334] "Generic (PLEG): container finished" podID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerID="e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7" exitCode=0 Feb 16 16:12:21 crc kubenswrapper[4705]: I0216 16:12:21.889456 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" 
event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerDied","Data":"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7"} Feb 16 16:12:22 crc kubenswrapper[4705]: I0216 16:12:22.901595 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerStarted","Data":"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32"} Feb 16 16:12:22 crc kubenswrapper[4705]: I0216 16:12:22.925673 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6hdb5" podStartSLOduration=3.431526008 podStartE2EDuration="6.925653172s" podCreationTimestamp="2026-02-16 16:12:16 +0000 UTC" firstStartedPulling="2026-02-16 16:12:18.853577212 +0000 UTC m=+4733.038554328" lastFinishedPulling="2026-02-16 16:12:22.347704416 +0000 UTC m=+4736.532681492" observedRunningTime="2026-02-16 16:12:22.92557765 +0000 UTC m=+4737.110554726" watchObservedRunningTime="2026-02-16 16:12:22.925653172 +0000 UTC m=+4737.110630248" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.030835 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.031151 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sch5q" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" containerID="cri-o://add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" gracePeriod=2 Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.540819 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.608716 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") pod \"2e1b6744-00c5-44f1-a5e6-0056eef02141\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.608777 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") pod \"2e1b6744-00c5-44f1-a5e6-0056eef02141\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.609008 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") pod \"2e1b6744-00c5-44f1-a5e6-0056eef02141\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.609783 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities" (OuterVolumeSpecName: "utilities") pod "2e1b6744-00c5-44f1-a5e6-0056eef02141" (UID: "2e1b6744-00c5-44f1-a5e6-0056eef02141"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.614212 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c" (OuterVolumeSpecName: "kube-api-access-jl42c") pod "2e1b6744-00c5-44f1-a5e6-0056eef02141" (UID: "2e1b6744-00c5-44f1-a5e6-0056eef02141"). InnerVolumeSpecName "kube-api-access-jl42c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.711998 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.712041 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.728643 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e1b6744-00c5-44f1-a5e6-0056eef02141" (UID: "2e1b6744-00c5-44f1-a5e6-0056eef02141"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.814914 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914067 4705 generic.go:334] "Generic (PLEG): container finished" podID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerID="add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" exitCode=0 Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914135 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914155 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerDied","Data":"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb"} Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914190 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerDied","Data":"c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318"} Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914211 4705 scope.go:117] "RemoveContainer" containerID="add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.951967 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.957113 4705 scope.go:117] "RemoveContainer" containerID="8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.962651 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.982626 4705 scope.go:117] "RemoveContainer" containerID="3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.039247 4705 scope.go:117] "RemoveContainer" containerID="add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" Feb 16 16:12:24 crc kubenswrapper[4705]: E0216 16:12:24.039717 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb\": container with ID starting with add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb not found: ID does not exist" containerID="add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.039764 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb"} err="failed to get container status \"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb\": rpc error: code = NotFound desc = could not find container \"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb\": container with ID starting with add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb not found: ID does not exist" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.039799 4705 scope.go:117] "RemoveContainer" containerID="8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085" Feb 16 16:12:24 crc kubenswrapper[4705]: E0216 16:12:24.040687 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085\": container with ID starting with 8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085 not found: ID does not exist" containerID="8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.040720 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085"} err="failed to get container status \"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085\": rpc error: code = NotFound desc = could not find container \"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085\": container with ID 
starting with 8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085 not found: ID does not exist" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.040741 4705 scope.go:117] "RemoveContainer" containerID="3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086" Feb 16 16:12:24 crc kubenswrapper[4705]: E0216 16:12:24.041044 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086\": container with ID starting with 3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086 not found: ID does not exist" containerID="3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.041087 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086"} err="failed to get container status \"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086\": rpc error: code = NotFound desc = could not find container \"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086\": container with ID starting with 3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086 not found: ID does not exist" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.432090 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" path="/var/lib/kubelet/pods/2e1b6744-00c5-44f1-a5e6-0056eef02141/volumes" Feb 16 16:12:25 crc kubenswrapper[4705]: E0216 16:12:25.421615 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 
16:12:26 crc kubenswrapper[4705]: E0216 16:12:26.433750 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:12:27 crc kubenswrapper[4705]: I0216 16:12:27.006202 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:27 crc kubenswrapper[4705]: I0216 16:12:27.006335 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:27 crc kubenswrapper[4705]: I0216 16:12:27.211052 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:28 crc kubenswrapper[4705]: I0216 16:12:28.041810 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:29 crc kubenswrapper[4705]: I0216 16:12:29.230563 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:29 crc kubenswrapper[4705]: I0216 16:12:29.419771 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:12:29 crc kubenswrapper[4705]: E0216 16:12:29.420345 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.038229 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln"] Feb 16 16:12:30 crc kubenswrapper[4705]: E0216 16:12:30.038848 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="extract-content" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.038867 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="extract-content" Feb 16 16:12:30 crc kubenswrapper[4705]: E0216 16:12:30.038895 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="extract-utilities" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.038902 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="extract-utilities" Feb 16 16:12:30 crc kubenswrapper[4705]: E0216 16:12:30.038937 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.038942 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.039193 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.040163 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.043314 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.044551 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.045043 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.046137 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.074526 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln"] Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.182294 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.182521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 
16:12:30.182906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.285027 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.285103 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.285188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.291495 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.292276 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.301279 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.370814 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.948211 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln"] Feb 16 16:12:30 crc kubenswrapper[4705]: W0216 16:12:30.964699 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca989d06_e6a2_47cc_abc9_17d4c2740830.slice/crio-f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f WatchSource:0}: Error finding container f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f: Status 404 returned error can't find the container with id f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.015519 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" event={"ID":"ca989d06-e6a2-47cc-abc9-17d4c2740830","Type":"ContainerStarted","Data":"f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f"} Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.015646 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6hdb5" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="registry-server" containerID="cri-o://f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" gracePeriod=2 Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.705766 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.722209 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") pod \"6dba665a-c068-4b3b-aab8-2f915e391d01\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.722332 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") pod \"6dba665a-c068-4b3b-aab8-2f915e391d01\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.722539 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") pod \"6dba665a-c068-4b3b-aab8-2f915e391d01\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.723560 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities" (OuterVolumeSpecName: "utilities") pod "6dba665a-c068-4b3b-aab8-2f915e391d01" (UID: "6dba665a-c068-4b3b-aab8-2f915e391d01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.758678 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269" (OuterVolumeSpecName: "kube-api-access-7m269") pod "6dba665a-c068-4b3b-aab8-2f915e391d01" (UID: "6dba665a-c068-4b3b-aab8-2f915e391d01"). InnerVolumeSpecName "kube-api-access-7m269". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.827339 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.827420 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.830466 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dba665a-c068-4b3b-aab8-2f915e391d01" (UID: "6dba665a-c068-4b3b-aab8-2f915e391d01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.930184 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.027012 4705 generic.go:334] "Generic (PLEG): container finished" podID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerID="f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" exitCode=0 Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.027079 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.027108 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerDied","Data":"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32"} Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.028171 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerDied","Data":"e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45"} Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.028222 4705 scope.go:117] "RemoveContainer" containerID="f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.029522 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" event={"ID":"ca989d06-e6a2-47cc-abc9-17d4c2740830","Type":"ContainerStarted","Data":"8e5d7f431f36f6fd6b00e87e15a1127f73153560045501b572102700a9673a6b"} Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.060332 4705 scope.go:117] "RemoveContainer" containerID="e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.064511 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" podStartSLOduration=1.5815447969999998 podStartE2EDuration="2.06449504s" podCreationTimestamp="2026-02-16 16:12:30 +0000 UTC" firstStartedPulling="2026-02-16 16:12:30.968223242 +0000 UTC m=+4745.153200328" lastFinishedPulling="2026-02-16 16:12:31.451173495 +0000 UTC m=+4745.636150571" observedRunningTime="2026-02-16 16:12:32.054848148 +0000 UTC m=+4746.239825234" 
watchObservedRunningTime="2026-02-16 16:12:32.06449504 +0000 UTC m=+4746.249472106" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.090266 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.101828 4705 scope.go:117] "RemoveContainer" containerID="9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.105599 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.144027 4705 scope.go:117] "RemoveContainer" containerID="f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" Feb 16 16:12:32 crc kubenswrapper[4705]: E0216 16:12:32.144785 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32\": container with ID starting with f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32 not found: ID does not exist" containerID="f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.144815 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32"} err="failed to get container status \"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32\": rpc error: code = NotFound desc = could not find container \"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32\": container with ID starting with f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32 not found: ID does not exist" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.144835 4705 scope.go:117] "RemoveContainer" 
containerID="e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7" Feb 16 16:12:32 crc kubenswrapper[4705]: E0216 16:12:32.145207 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7\": container with ID starting with e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7 not found: ID does not exist" containerID="e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.145242 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7"} err="failed to get container status \"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7\": rpc error: code = NotFound desc = could not find container \"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7\": container with ID starting with e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7 not found: ID does not exist" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.145255 4705 scope.go:117] "RemoveContainer" containerID="9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188" Feb 16 16:12:32 crc kubenswrapper[4705]: E0216 16:12:32.145700 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188\": container with ID starting with 9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188 not found: ID does not exist" containerID="9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.145789 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188"} err="failed to get container status \"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188\": rpc error: code = NotFound desc = could not find container \"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188\": container with ID starting with 9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188 not found: ID does not exist" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.436810 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" path="/var/lib/kubelet/pods/6dba665a-c068-4b3b-aab8-2f915e391d01/volumes" Feb 16 16:12:36 crc kubenswrapper[4705]: E0216 16:12:36.432665 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:12:41 crc kubenswrapper[4705]: E0216 16:12:41.423991 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:12:44 crc kubenswrapper[4705]: I0216 16:12:44.421629 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:12:45 crc kubenswrapper[4705]: I0216 16:12:45.179449 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a"} Feb 16 16:12:48 crc kubenswrapper[4705]: E0216 16:12:48.424326 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:12:53 crc kubenswrapper[4705]: E0216 16:12:53.422628 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:00 crc kubenswrapper[4705]: E0216 16:13:00.423185 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:08 crc kubenswrapper[4705]: E0216 16:13:08.425005 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:11 crc kubenswrapper[4705]: E0216 16:13:11.422613 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:19 crc kubenswrapper[4705]: E0216 16:13:19.422904 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:26 crc kubenswrapper[4705]: E0216 16:13:26.432398 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:33 crc kubenswrapper[4705]: E0216 16:13:33.422819 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:39 crc kubenswrapper[4705]: E0216 16:13:39.425516 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:47 crc kubenswrapper[4705]: E0216 16:13:47.423077 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:54 crc kubenswrapper[4705]: E0216 16:13:54.422123 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:58 crc kubenswrapper[4705]: E0216 16:13:58.424935 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:06 crc kubenswrapper[4705]: E0216 16:14:06.433534 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:11 crc kubenswrapper[4705]: E0216 16:14:11.423059 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:17 crc kubenswrapper[4705]: E0216 16:14:17.422467 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:24 crc kubenswrapper[4705]: E0216 16:14:24.422543 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:32 crc kubenswrapper[4705]: E0216 16:14:32.422013 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:35 crc kubenswrapper[4705]: E0216 16:14:35.423588 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:46 crc kubenswrapper[4705]: E0216 16:14:46.435225 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:47 crc kubenswrapper[4705]: E0216 16:14:47.422278 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:58 crc kubenswrapper[4705]: E0216 16:14:58.425112 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:59 crc kubenswrapper[4705]: E0216 16:14:59.421247 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.261622 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57"] Feb 16 16:15:00 crc kubenswrapper[4705]: E0216 16:15:00.262499 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="extract-content" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.262535 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="extract-content" Feb 16 16:15:00 crc kubenswrapper[4705]: E0216 16:15:00.262579 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="extract-utilities" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.262590 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="extract-utilities" Feb 16 16:15:00 crc kubenswrapper[4705]: E0216 16:15:00.262655 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="registry-server" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.262666 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="registry-server" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.262963 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="registry-server" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.264174 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.267380 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.277752 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57"] Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.283880 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.422036 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.422178 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.422317 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.525277 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.525747 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.526044 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: 
\"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.526620 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.632268 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.633324 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.915789 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:01 crc kubenswrapper[4705]: W0216 16:15:01.383841 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb25462ce_23b8_42a7_aeda_3a8c72505a1c.slice/crio-8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e WatchSource:0}: Error finding container 8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e: Status 404 returned error can't find the container with id 8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.384564 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57"] Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.684838 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.685243 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.969262 4705 generic.go:334] "Generic (PLEG): container finished" podID="b25462ce-23b8-42a7-aeda-3a8c72505a1c" containerID="8b2f1168697d511f3681e813ab15d4f8950b127d44cb2e7a0f464220baa3ed20" exitCode=0 Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.969313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" event={"ID":"b25462ce-23b8-42a7-aeda-3a8c72505a1c","Type":"ContainerDied","Data":"8b2f1168697d511f3681e813ab15d4f8950b127d44cb2e7a0f464220baa3ed20"} Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.969380 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" event={"ID":"b25462ce-23b8-42a7-aeda-3a8c72505a1c","Type":"ContainerStarted","Data":"8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e"} Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.403151 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.414299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") pod \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.425021 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49" (OuterVolumeSpecName: "kube-api-access-68k49") pod "b25462ce-23b8-42a7-aeda-3a8c72505a1c" (UID: "b25462ce-23b8-42a7-aeda-3a8c72505a1c"). InnerVolumeSpecName "kube-api-access-68k49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.517053 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") pod \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.519611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") pod \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.520828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "b25462ce-23b8-42a7-aeda-3a8c72505a1c" (UID: "b25462ce-23b8-42a7-aeda-3a8c72505a1c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.523013 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.523821 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") on node \"crc\" DevicePath \"\"" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.542389 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b25462ce-23b8-42a7-aeda-3a8c72505a1c" (UID: "b25462ce-23b8-42a7-aeda-3a8c72505a1c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.625635 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.988937 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" event={"ID":"b25462ce-23b8-42a7-aeda-3a8c72505a1c","Type":"ContainerDied","Data":"8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e"} Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.988989 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.989111 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:04 crc kubenswrapper[4705]: I0216 16:15:04.500529 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 16:15:04 crc kubenswrapper[4705]: I0216 16:15:04.514321 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 16:15:06 crc kubenswrapper[4705]: I0216 16:15:06.433038 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a4c227-649b-4c63-a135-9e62204fb5e6" path="/var/lib/kubelet/pods/d7a4c227-649b-4c63-a135-9e62204fb5e6/volumes" Feb 16 16:15:10 crc kubenswrapper[4705]: E0216 16:15:10.423601 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:11 crc kubenswrapper[4705]: E0216 16:15:11.422922 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:15:21 crc kubenswrapper[4705]: E0216 16:15:21.423416 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:25 crc kubenswrapper[4705]: E0216 
16:15:25.423106 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:15:31 crc kubenswrapper[4705]: I0216 16:15:31.685068 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:15:31 crc kubenswrapper[4705]: I0216 16:15:31.685737 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:15:33 crc kubenswrapper[4705]: E0216 16:15:33.423047 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:36 crc kubenswrapper[4705]: I0216 16:15:36.497810 4705 scope.go:117] "RemoveContainer" containerID="3d19ac739f139aac059dd3041dabf5e11ac0e7c9a2e1687b953e4ecc1918d35b" Feb 16 16:15:38 crc kubenswrapper[4705]: E0216 16:15:38.424251 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:15:47 crc kubenswrapper[4705]: E0216 16:15:47.423252 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:52 crc kubenswrapper[4705]: E0216 16:15:52.424094 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:15:59 crc kubenswrapper[4705]: E0216 16:15:59.424879 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.684025 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.684524 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.684574 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.685461 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.685516 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a" gracePeriod=600 Feb 16 16:16:02 crc kubenswrapper[4705]: I0216 16:16:02.725758 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a" exitCode=0 Feb 16 16:16:02 crc kubenswrapper[4705]: I0216 16:16:02.726550 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a"} Feb 16 16:16:02 crc kubenswrapper[4705]: I0216 16:16:02.726608 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"} Feb 16 16:16:02 crc kubenswrapper[4705]: I0216 16:16:02.726632 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:16:06 crc kubenswrapper[4705]: E0216 16:16:06.432928 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:16:12 crc kubenswrapper[4705]: E0216 16:16:12.424083 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:16:20 crc kubenswrapper[4705]: I0216 16:16:20.424176 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:16:20 crc kubenswrapper[4705]: E0216 16:16:20.562744 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:16:20 crc kubenswrapper[4705]: E0216 16:16:20.562867 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:16:20 crc kubenswrapper[4705]: E0216 16:16:20.563217 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5
d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:16:20 crc kubenswrapper[4705]: E0216 16:16:20.564533 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:16:23 crc kubenswrapper[4705]: E0216 16:16:23.507291 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:16:23 crc kubenswrapper[4705]: E0216 16:16:23.508051 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:16:23 crc kubenswrapper[4705]: E0216 16:16:23.508220 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:16:23 crc kubenswrapper[4705]: E0216 16:16:23.509480 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:16:32 crc kubenswrapper[4705]: E0216 16:16:32.423743 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:16:38 crc kubenswrapper[4705]: E0216 16:16:38.425415 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:16:47 crc kubenswrapper[4705]: E0216 16:16:47.421850 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:16:50 crc kubenswrapper[4705]: E0216 16:16:50.422558 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:01 crc kubenswrapper[4705]: E0216 16:17:01.425431 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:04 crc kubenswrapper[4705]: E0216 16:17:04.421728 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:13 crc kubenswrapper[4705]: E0216 16:17:13.423173 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:15 crc kubenswrapper[4705]: E0216 16:17:15.423463 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:28 crc kubenswrapper[4705]: E0216 16:17:28.422131 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:29 crc kubenswrapper[4705]: E0216 16:17:29.423042 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:43 crc kubenswrapper[4705]: E0216 16:17:43.422007 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:43 crc kubenswrapper[4705]: E0216 16:17:43.422237 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:56 crc kubenswrapper[4705]: E0216 16:17:56.434123 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:57 crc kubenswrapper[4705]: E0216 16:17:57.423736 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:09 crc kubenswrapper[4705]: E0216 16:18:09.424013 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:18:10 crc kubenswrapper[4705]: E0216 16:18:10.424699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:21 crc kubenswrapper[4705]: E0216 16:18:21.426055 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:18:25 crc kubenswrapper[4705]: E0216 16:18:25.423187 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:31 crc kubenswrapper[4705]: I0216 16:18:31.684276 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:18:31 crc kubenswrapper[4705]: I0216 16:18:31.684790 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:18:35 crc kubenswrapper[4705]: E0216 16:18:35.423514 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.912607 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:35 crc kubenswrapper[4705]: E0216 16:18:35.913871 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b25462ce-23b8-42a7-aeda-3a8c72505a1c" containerName="collect-profiles" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.913924 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b25462ce-23b8-42a7-aeda-3a8c72505a1c" containerName="collect-profiles" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.914596 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b25462ce-23b8-42a7-aeda-3a8c72505a1c" containerName="collect-profiles" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.918645 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.933083 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.076859 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.077495 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.077734 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.181333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.181507 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.182085 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.186130 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.186630 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.216592 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.262090 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.900921 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:37 crc kubenswrapper[4705]: I0216 16:18:37.787519 4705 generic.go:334] "Generic (PLEG): container finished" podID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerID="21669da6af69e10615ec9d9bfd683312766c7eb62e5afb7d2c4d0c330e7be906" exitCode=0 Feb 16 16:18:37 crc kubenswrapper[4705]: I0216 16:18:37.787588 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerDied","Data":"21669da6af69e10615ec9d9bfd683312766c7eb62e5afb7d2c4d0c330e7be906"} Feb 16 16:18:37 crc kubenswrapper[4705]: I0216 16:18:37.787623 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerStarted","Data":"74791c97490a6d2870982096091a8f9775bf5d67f5c84b13bceb4d2757a31478"} Feb 16 16:18:38 crc kubenswrapper[4705]: E0216 16:18:38.426172 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:38 crc kubenswrapper[4705]: I0216 16:18:38.804645 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerStarted","Data":"f76a2880637ec8e061f810a39410c0ce57f54c2c68714b7a697e5bece42d51ef"} Feb 16 16:18:39 crc kubenswrapper[4705]: I0216 16:18:39.820836 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerID="f76a2880637ec8e061f810a39410c0ce57f54c2c68714b7a697e5bece42d51ef" exitCode=0 Feb 16 16:18:39 crc kubenswrapper[4705]: I0216 16:18:39.820896 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerDied","Data":"f76a2880637ec8e061f810a39410c0ce57f54c2c68714b7a697e5bece42d51ef"} Feb 16 16:18:41 crc kubenswrapper[4705]: I0216 16:18:41.849118 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerStarted","Data":"8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c"} Feb 16 16:18:41 crc kubenswrapper[4705]: I0216 16:18:41.882165 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ptlxj" podStartSLOduration=4.364152852 podStartE2EDuration="6.882137831s" podCreationTimestamp="2026-02-16 16:18:35 +0000 UTC" firstStartedPulling="2026-02-16 16:18:37.790701891 +0000 UTC m=+5111.975678967" lastFinishedPulling="2026-02-16 16:18:40.30868683 +0000 UTC m=+5114.493663946" observedRunningTime="2026-02-16 16:18:41.873756974 +0000 UTC m=+5116.058734060" watchObservedRunningTime="2026-02-16 16:18:41.882137831 +0000 UTC m=+5116.067114907" Feb 16 16:18:46 crc kubenswrapper[4705]: I0216 16:18:46.263226 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:46 crc kubenswrapper[4705]: I0216 16:18:46.263773 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:46 crc kubenswrapper[4705]: I0216 16:18:46.340172 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ptlxj" Feb 
16 16:18:47 crc kubenswrapper[4705]: I0216 16:18:47.001771 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:49 crc kubenswrapper[4705]: E0216 16:18:49.423841 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:18:49 crc kubenswrapper[4705]: I0216 16:18:49.966559 4705 generic.go:334] "Generic (PLEG): container finished" podID="ca989d06-e6a2-47cc-abc9-17d4c2740830" containerID="8e5d7f431f36f6fd6b00e87e15a1127f73153560045501b572102700a9673a6b" exitCode=2 Feb 16 16:18:49 crc kubenswrapper[4705]: I0216 16:18:49.966614 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" event={"ID":"ca989d06-e6a2-47cc-abc9-17d4c2740830","Type":"ContainerDied","Data":"8e5d7f431f36f6fd6b00e87e15a1127f73153560045501b572102700a9673a6b"} Feb 16 16:18:50 crc kubenswrapper[4705]: E0216 16:18:50.421761 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.672952 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.676316 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.685212 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.733491 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.733592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.733653 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.835445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.835524 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.835572 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.836181 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.836257 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.866324 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.003694 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.576322 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.679289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") pod \"ca989d06-e6a2-47cc-abc9-17d4c2740830\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.679328 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") pod \"ca989d06-e6a2-47cc-abc9-17d4c2740830\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.679529 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") pod \"ca989d06-e6a2-47cc-abc9-17d4c2740830\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.684232 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.689358 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll" (OuterVolumeSpecName: "kube-api-access-stfll") pod "ca989d06-e6a2-47cc-abc9-17d4c2740830" (UID: "ca989d06-e6a2-47cc-abc9-17d4c2740830"). InnerVolumeSpecName "kube-api-access-stfll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.723552 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory" (OuterVolumeSpecName: "inventory") pod "ca989d06-e6a2-47cc-abc9-17d4c2740830" (UID: "ca989d06-e6a2-47cc-abc9-17d4c2740830"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.732504 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ca989d06-e6a2-47cc-abc9-17d4c2740830" (UID: "ca989d06-e6a2-47cc-abc9-17d4c2740830"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.785775 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.785809 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.785820 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.000526 4705 generic.go:334] "Generic (PLEG): container finished" podID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" 
containerID="788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731" exitCode=0 Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.001138 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerDied","Data":"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731"} Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.001207 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerStarted","Data":"81f32445518ea8cbafd663f15aa0508e04266932532a988d329155b948f3a4be"} Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.014344 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" event={"ID":"ca989d06-e6a2-47cc-abc9-17d4c2740830","Type":"ContainerDied","Data":"f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f"} Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.014423 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f" Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.014501 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:18:53 crc kubenswrapper[4705]: I0216 16:18:53.030587 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerStarted","Data":"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c"} Feb 16 16:18:54 crc kubenswrapper[4705]: I0216 16:18:54.044241 4705 generic.go:334] "Generic (PLEG): container finished" podID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerID="17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c" exitCode=0 Feb 16 16:18:54 crc kubenswrapper[4705]: I0216 16:18:54.044293 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerDied","Data":"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c"} Feb 16 16:18:55 crc kubenswrapper[4705]: I0216 16:18:55.062802 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerStarted","Data":"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a"} Feb 16 16:18:55 crc kubenswrapper[4705]: I0216 16:18:55.096570 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jm2rk" podStartSLOduration=2.623505394 podStartE2EDuration="5.096542814s" podCreationTimestamp="2026-02-16 16:18:50 +0000 UTC" firstStartedPulling="2026-02-16 16:18:52.005486898 +0000 UTC m=+5126.190463974" lastFinishedPulling="2026-02-16 16:18:54.478524318 +0000 UTC m=+5128.663501394" observedRunningTime="2026-02-16 16:18:55.084170875 +0000 UTC m=+5129.269147951" watchObservedRunningTime="2026-02-16 16:18:55.096542814 +0000 UTC m=+5129.281519890" Feb 16 16:18:56 crc kubenswrapper[4705]: 
I0216 16:18:56.463361 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:56 crc kubenswrapper[4705]: I0216 16:18:56.463694 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ptlxj" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server" containerID="cri-o://8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" gracePeriod=2 Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.650538 4705 log.go:32] "ExecSync cmd from runtime service failed" err=< Feb 16 16:18:56 crc kubenswrapper[4705]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Feb 16 16:18:56 crc kubenswrapper[4705]: fail startup Feb 16 16:18:56 crc kubenswrapper[4705]: , stdout: , stderr: , exit code -1 Feb 16 16:18:56 crc kubenswrapper[4705]: > containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.652186 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.653047 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 
16:18:56.653184 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-ptlxj" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server" Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.659519 4705 log.go:32] "ExecSync cmd from runtime service failed" err=< Feb 16 16:18:56 crc kubenswrapper[4705]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Feb 16 16:18:56 crc kubenswrapper[4705]: fail startup Feb 16 16:18:56 crc kubenswrapper[4705]: , stdout: , stderr: , exit code -1 Feb 16 16:18:56 crc kubenswrapper[4705]: > containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.660422 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.660926 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.660981 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" probeType="Liveness" pod="openshift-marketplace/community-operators-ptlxj" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.092728 4705 generic.go:334] "Generic (PLEG): container finished" podID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" exitCode=0 Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.092848 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerDied","Data":"8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c"} Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.093238 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerDied","Data":"74791c97490a6d2870982096091a8f9775bf5d67f5c84b13bceb4d2757a31478"} Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.093258 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74791c97490a6d2870982096091a8f9775bf5d67f5c84b13bceb4d2757a31478" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.124151 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.289218 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") pod \"d2cc514e-4501-4dde-a3ce-442097cf4824\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.289785 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") pod \"d2cc514e-4501-4dde-a3ce-442097cf4824\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.289896 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") pod \"d2cc514e-4501-4dde-a3ce-442097cf4824\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.290285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities" (OuterVolumeSpecName: "utilities") pod "d2cc514e-4501-4dde-a3ce-442097cf4824" (UID: "d2cc514e-4501-4dde-a3ce-442097cf4824"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.291646 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.298758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54" (OuterVolumeSpecName: "kube-api-access-mlf54") pod "d2cc514e-4501-4dde-a3ce-442097cf4824" (UID: "d2cc514e-4501-4dde-a3ce-442097cf4824"). InnerVolumeSpecName "kube-api-access-mlf54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.348831 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2cc514e-4501-4dde-a3ce-442097cf4824" (UID: "d2cc514e-4501-4dde-a3ce-442097cf4824"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.395528 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.395588 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:58 crc kubenswrapper[4705]: I0216 16:18:58.104543 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:58 crc kubenswrapper[4705]: I0216 16:18:58.150815 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:58 crc kubenswrapper[4705]: I0216 16:18:58.159391 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:58 crc kubenswrapper[4705]: I0216 16:18:58.444317 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" path="/var/lib/kubelet/pods/d2cc514e-4501-4dde-a3ce-442097cf4824/volumes" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.004814 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.005887 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.071580 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.214164 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.684658 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.684737 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:19:02 crc kubenswrapper[4705]: I0216 16:19:02.271749 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:19:02 crc kubenswrapper[4705]: E0216 16:19:02.424648 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:19:02 crc kubenswrapper[4705]: E0216 16:19:02.425544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.166278 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jm2rk" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="registry-server" containerID="cri-o://8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" gracePeriod=2 Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.720506 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.789100 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") pod \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.789418 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") pod \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.789578 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") pod \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.790219 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities" (OuterVolumeSpecName: "utilities") pod "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" (UID: "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.816699 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v" (OuterVolumeSpecName: "kube-api-access-csc6v") pod "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" (UID: "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a"). InnerVolumeSpecName "kube-api-access-csc6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.818305 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" (UID: "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.892622 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.892692 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") on node \"crc\" DevicePath \"\"" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.892707 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.190802 4705 generic.go:334] "Generic (PLEG): container finished" podID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerID="8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" exitCode=0 Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.190866 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.190876 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerDied","Data":"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a"} Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.191720 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerDied","Data":"81f32445518ea8cbafd663f15aa0508e04266932532a988d329155b948f3a4be"} Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.191742 4705 scope.go:117] "RemoveContainer" containerID="8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.228780 4705 scope.go:117] "RemoveContainer" containerID="17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.251602 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.268565 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.285937 4705 scope.go:117] "RemoveContainer" containerID="788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.364820 4705 scope.go:117] "RemoveContainer" containerID="8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" Feb 16 16:19:04 crc kubenswrapper[4705]: E0216 16:19:04.365467 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a\": container with ID starting with 8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a not found: ID does not exist" containerID="8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.365512 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a"} err="failed to get container status \"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a\": rpc error: code = NotFound desc = could not find container \"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a\": container with ID starting with 8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a not found: ID does not exist" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.365540 4705 scope.go:117] "RemoveContainer" containerID="17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c" Feb 16 16:19:04 crc kubenswrapper[4705]: E0216 16:19:04.365934 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c\": container with ID starting with 17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c not found: ID does not exist" containerID="17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.365974 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c"} err="failed to get container status \"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c\": rpc error: code = NotFound desc = could not find container \"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c\": container with ID 
starting with 17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c not found: ID does not exist" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.366006 4705 scope.go:117] "RemoveContainer" containerID="788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731" Feb 16 16:19:04 crc kubenswrapper[4705]: E0216 16:19:04.366565 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731\": container with ID starting with 788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731 not found: ID does not exist" containerID="788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.366621 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731"} err="failed to get container status \"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731\": rpc error: code = NotFound desc = could not find container \"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731\": container with ID starting with 788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731 not found: ID does not exist" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.432131 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" path="/var/lib/kubelet/pods/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a/volumes" Feb 16 16:19:17 crc kubenswrapper[4705]: E0216 16:19:17.422241 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 
16:19:17 crc kubenswrapper[4705]: E0216 16:19:17.422241 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:19:28 crc kubenswrapper[4705]: E0216 16:19:28.430411 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:19:31 crc kubenswrapper[4705]: E0216 16:19:31.423100 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:19:31 crc kubenswrapper[4705]: I0216 16:19:31.684006 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:19:31 crc kubenswrapper[4705]: I0216 16:19:31.684244 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:19:31 crc kubenswrapper[4705]: 
I0216 16:19:31.684338 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:19:31 crc kubenswrapper[4705]: I0216 16:19:31.685300 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:19:31 crc kubenswrapper[4705]: I0216 16:19:31.685364 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" gracePeriod=600 Feb 16 16:19:31 crc kubenswrapper[4705]: E0216 16:19:31.814102 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:19:32 crc kubenswrapper[4705]: I0216 16:19:32.611755 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" exitCode=0 Feb 16 16:19:32 crc kubenswrapper[4705]: I0216 16:19:32.611813 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"}
Feb 16 16:19:32 crc kubenswrapper[4705]: I0216 16:19:32.611860 4705 scope.go:117] "RemoveContainer" containerID="d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a"
Feb 16 16:19:32 crc kubenswrapper[4705]: I0216 16:19:32.612990 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"
Feb 16 16:19:32 crc kubenswrapper[4705]: E0216 16:19:32.613293 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:19:42 crc kubenswrapper[4705]: E0216 16:19:42.422256 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:19:43 crc kubenswrapper[4705]: I0216 16:19:43.425953 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"
Feb 16 16:19:43 crc kubenswrapper[4705]: E0216 16:19:43.426741 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:19:43 crc kubenswrapper[4705]: E0216 16:19:43.426995 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:19:53 crc kubenswrapper[4705]: E0216 16:19:53.421642 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:19:54 crc kubenswrapper[4705]: E0216 16:19:54.421762 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:19:55 crc kubenswrapper[4705]: I0216 16:19:55.419814 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"
Feb 16 16:19:55 crc kubenswrapper[4705]: E0216 16:19:55.420391 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:20:04 crc kubenswrapper[4705]: E0216 16:20:04.421532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:20:06 crc kubenswrapper[4705]: E0216 16:20:06.428577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:20:10 crc kubenswrapper[4705]: I0216 16:20:10.420212 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"
Feb 16 16:20:10 crc kubenswrapper[4705]: E0216 16:20:10.420975 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.312775 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"]
Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.313974 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="extract-content"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.313991 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="extract-content"
Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314016 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="extract-utilities"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314025 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="extract-utilities"
Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314035 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314042 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server"
Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314068 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="registry-server"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314076 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="registry-server"
Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314103 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca989d06-e6a2-47cc-abc9-17d4c2740830" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314113 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca989d06-e6a2-47cc-abc9-17d4c2740830" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314138 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="extract-content"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314144 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="extract-content"
Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314156 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="extract-utilities"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314163 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="extract-utilities"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314497 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314522 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="registry-server"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314538 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca989d06-e6a2-47cc-abc9-17d4c2740830" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.316577 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/must-gather-tx2kt"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.323393 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bbqmf"/"openshift-service-ca.crt"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.323804 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bbqmf"/"kube-root-ca.crt"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.323625 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bbqmf"/"default-dockercfg-crlth"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.348797 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"]
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.429352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.429452 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.532074 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.532156 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt"
Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.533916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt"
Feb 16 16:20:15 crc kubenswrapper[4705]: I0216 16:20:15.230152 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt"
Feb 16 16:20:15 crc kubenswrapper[4705]: I0216 16:20:15.243241 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/must-gather-tx2kt"
Feb 16 16:20:15 crc kubenswrapper[4705]: I0216 16:20:15.822572 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"]
Feb 16 16:20:16 crc kubenswrapper[4705]: I0216 16:20:16.118860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" event={"ID":"b3941987-2937-407a-a067-3f3af600f1f0","Type":"ContainerStarted","Data":"c6c40e0f334072f7d56c077890f939b9cbaea7957db41512667c103bfd229c9c"}
Feb 16 16:20:16 crc kubenswrapper[4705]: E0216 16:20:16.430125 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:20:21 crc kubenswrapper[4705]: E0216 16:20:21.422496 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:20:23 crc kubenswrapper[4705]: I0216 16:20:23.420330 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"
Feb 16 16:20:23 crc kubenswrapper[4705]: E0216 16:20:23.430401 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:20:27 crc kubenswrapper[4705]: E0216 16:20:27.422454 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:20:29 crc kubenswrapper[4705]: I0216 16:20:29.277022 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" event={"ID":"b3941987-2937-407a-a067-3f3af600f1f0","Type":"ContainerStarted","Data":"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd"}
Feb 16 16:20:29 crc kubenswrapper[4705]: I0216 16:20:29.277626 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" event={"ID":"b3941987-2937-407a-a067-3f3af600f1f0","Type":"ContainerStarted","Data":"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97"}
Feb 16 16:20:30 crc kubenswrapper[4705]: I0216 16:20:30.309535 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" podStartSLOduration=3.55589947 podStartE2EDuration="16.309507568s" podCreationTimestamp="2026-02-16 16:20:14 +0000 UTC" firstStartedPulling="2026-02-16 16:20:15.821280967 +0000 UTC m=+5210.006258043" lastFinishedPulling="2026-02-16 16:20:28.574889065 +0000 UTC m=+5222.759866141" observedRunningTime="2026-02-16 16:20:30.303797987 +0000 UTC m=+5224.488775083" watchObservedRunningTime="2026-02-16 16:20:30.309507568 +0000 UTC m=+5224.494484634"
Feb 16 16:20:32 crc kubenswrapper[4705]: E0216 16:20:32.422577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.824942 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-v5rq9"]
Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.828889 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.891293 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.891419 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.998817 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.999318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.999801 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:20:37 crc kubenswrapper[4705]: I0216 16:20:37.051241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:20:37 crc kubenswrapper[4705]: I0216 16:20:37.152802 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:20:37 crc kubenswrapper[4705]: I0216 16:20:37.361959 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" event={"ID":"89a1fadc-0734-4725-bd9d-61b8107bfb0a","Type":"ContainerStarted","Data":"bd2ba4eaf5239f5cbfbeb7f9af95435ccb1822a1e30795a3129148f059a5aa63"}
Feb 16 16:20:37 crc kubenswrapper[4705]: I0216 16:20:37.420414 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"
Feb 16 16:20:37 crc kubenswrapper[4705]: E0216 16:20:37.420895 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:20:40 crc kubenswrapper[4705]: E0216 16:20:40.424048 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:20:44 crc kubenswrapper[4705]: E0216 16:20:44.421609 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:20:49 crc kubenswrapper[4705]: I0216 16:20:49.420636 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"
Feb 16 16:20:49 crc kubenswrapper[4705]: E0216 16:20:49.421532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:20:51 crc kubenswrapper[4705]: I0216 16:20:51.544310 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" event={"ID":"89a1fadc-0734-4725-bd9d-61b8107bfb0a","Type":"ContainerStarted","Data":"8f9d60d3ff7f4d7d9fa574d150891b9282958c20ce0c2bd53d6b2206b8fed3e2"}
Feb 16 16:20:51 crc kubenswrapper[4705]: I0216 16:20:51.579102 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" podStartSLOduration=2.050772156 podStartE2EDuration="15.579079725s" podCreationTimestamp="2026-02-16 16:20:36 +0000 UTC" firstStartedPulling="2026-02-16 16:20:37.217262534 +0000 UTC m=+5231.402239610" lastFinishedPulling="2026-02-16 16:20:50.745570103 +0000 UTC m=+5244.930547179" observedRunningTime="2026-02-16 16:20:51.564409501 +0000 UTC m=+5245.749386597" watchObservedRunningTime="2026-02-16 16:20:51.579079725 +0000 UTC m=+5245.764056801"
Feb 16 16:20:55 crc kubenswrapper[4705]: E0216 16:20:55.422607 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:20:58 crc kubenswrapper[4705]: E0216 16:20:58.426819 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:21:04 crc kubenswrapper[4705]: I0216 16:21:04.419654 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"
Feb 16 16:21:04 crc kubenswrapper[4705]: E0216 16:21:04.420496 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:21:08 crc kubenswrapper[4705]: I0216 16:21:08.781703 4705 generic.go:334] "Generic (PLEG): container finished" podID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" containerID="8f9d60d3ff7f4d7d9fa574d150891b9282958c20ce0c2bd53d6b2206b8fed3e2" exitCode=0
Feb 16 16:21:08 crc kubenswrapper[4705]: I0216 16:21:08.781778 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" event={"ID":"89a1fadc-0734-4725-bd9d-61b8107bfb0a","Type":"ContainerDied","Data":"8f9d60d3ff7f4d7d9fa574d150891b9282958c20ce0c2bd53d6b2206b8fed3e2"}
Feb 16 16:21:09 crc kubenswrapper[4705]: I0216 16:21:09.968340 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.015142 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-v5rq9"]
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.025903 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-v5rq9"]
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.095308 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") pod \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") "
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.095404 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host" (OuterVolumeSpecName: "host") pod "89a1fadc-0734-4725-bd9d-61b8107bfb0a" (UID: "89a1fadc-0734-4725-bd9d-61b8107bfb0a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.095470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") pod \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") "
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.096975 4705 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") on node \"crc\" DevicePath \"\""
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.104580 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx" (OuterVolumeSpecName: "kube-api-access-558lx") pod "89a1fadc-0734-4725-bd9d-61b8107bfb0a" (UID: "89a1fadc-0734-4725-bd9d-61b8107bfb0a"). InnerVolumeSpecName "kube-api-access-558lx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.201530 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") on node \"crc\" DevicePath \"\""
Feb 16 16:21:10 crc kubenswrapper[4705]: E0216 16:21:10.422481 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.433284 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" path="/var/lib/kubelet/pods/89a1fadc-0734-4725-bd9d-61b8107bfb0a/volumes"
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.805128 4705 scope.go:117] "RemoveContainer" containerID="8f9d60d3ff7f4d7d9fa574d150891b9282958c20ce0c2bd53d6b2206b8fed3e2"
Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.805512 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.282686 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-qqxfr"]
Feb 16 16:21:11 crc kubenswrapper[4705]: E0216 16:21:11.283679 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" containerName="container-00"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.283697 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" containerName="container-00"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.283993 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" containerName="container-00"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.285071 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.438226 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmh95\" (UniqueName: \"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.438571 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.541259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmh95\" (UniqueName: \"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.541312 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.542085 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.566279 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmh95\" (UniqueName: \"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.611918 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.821460 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" event={"ID":"8e71011d-2714-45d9-883a-ca78a022c8f2","Type":"ContainerStarted","Data":"64b31e7776f7a0e7c43a83239e07e1c36d867bbe7ee9959a4a732ac0b14ed45a"}
Feb 16 16:21:12 crc kubenswrapper[4705]: I0216 16:21:12.836179 4705 generic.go:334] "Generic (PLEG): container finished" podID="8e71011d-2714-45d9-883a-ca78a022c8f2" containerID="91a7329eb588d4dc77644c0a8fcd8b34a7c8d1a5b54ab9b07a6ef9b8cd0d72fc" exitCode=1
Feb 16 16:21:12 crc kubenswrapper[4705]: I0216 16:21:12.836272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" event={"ID":"8e71011d-2714-45d9-883a-ca78a022c8f2","Type":"ContainerDied","Data":"91a7329eb588d4dc77644c0a8fcd8b34a7c8d1a5b54ab9b07a6ef9b8cd0d72fc"}
Feb 16 16:21:12 crc kubenswrapper[4705]: I0216 16:21:12.875531 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-qqxfr"]
Feb 16 16:21:12 crc kubenswrapper[4705]: I0216 16:21:12.884643 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-qqxfr"]
Feb 16 16:21:13 crc kubenswrapper[4705]: E0216 16:21:13.422131 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:13.999842 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.110986 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") pod \"8e71011d-2714-45d9-883a-ca78a022c8f2\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") "
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.111067 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmh95\" (UniqueName: \"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") pod \"8e71011d-2714-45d9-883a-ca78a022c8f2\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") "
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.111140 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host" (OuterVolumeSpecName: "host") pod "8e71011d-2714-45d9-883a-ca78a022c8f2" (UID: "8e71011d-2714-45d9-883a-ca78a022c8f2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.111681 4705 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") on node \"crc\" DevicePath \"\""
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.118602 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95" (OuterVolumeSpecName: "kube-api-access-gmh95") pod "8e71011d-2714-45d9-883a-ca78a022c8f2" (UID: "8e71011d-2714-45d9-883a-ca78a022c8f2"). InnerVolumeSpecName "kube-api-access-gmh95". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.215003 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmh95\" (UniqueName: \"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") on node \"crc\" DevicePath \"\""
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.434215 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e71011d-2714-45d9-883a-ca78a022c8f2" path="/var/lib/kubelet/pods/8e71011d-2714-45d9-883a-ca78a022c8f2/volumes"
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.864210 4705 scope.go:117] "RemoveContainer" containerID="91a7329eb588d4dc77644c0a8fcd8b34a7c8d1a5b54ab9b07a6ef9b8cd0d72fc"
Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.864503 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr"
Feb 16 16:21:17 crc kubenswrapper[4705]: I0216 16:21:17.420727 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"
Feb 16 16:21:17 crc kubenswrapper[4705]: E0216 16:21:17.421609 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:21:25 crc kubenswrapper[4705]: I0216 16:21:25.422413 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 16:21:25 crc kubenswrapper[4705]: E0216 16:21:25.535809 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 16:21:25 crc kubenswrapper[4705]: E0216 16:21:25.535890 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 16:21:25 crc kubenswrapper[4705]: E0216 16:21:25.537206 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:21:25 crc kubenswrapper[4705]: E0216 16:21:25.538485 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:21:26 crc kubenswrapper[4705]: E0216 16:21:26.556190 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:21:26 crc kubenswrapper[4705]: E0216 16:21:26.556591 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:21:26 crc kubenswrapper[4705]: E0216 16:21:26.556763 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:21:26 crc kubenswrapper[4705]: E0216 16:21:26.557982 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:21:29 crc kubenswrapper[4705]: I0216 16:21:29.420787 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:21:29 crc kubenswrapper[4705]: E0216 16:21:29.421473 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:21:38 crc kubenswrapper[4705]: E0216 16:21:38.421721 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:21:38 crc kubenswrapper[4705]: E0216 16:21:38.421842 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:21:42 crc kubenswrapper[4705]: I0216 16:21:42.424791 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:21:42 crc kubenswrapper[4705]: E0216 16:21:42.425901 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:21:50 crc kubenswrapper[4705]: E0216 16:21:50.429599 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:21:50 crc kubenswrapper[4705]: E0216 16:21:50.440979 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:21:55 crc kubenswrapper[4705]: I0216 16:21:55.420518 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:21:55 crc kubenswrapper[4705]: E0216 16:21:55.421411 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:02 crc kubenswrapper[4705]: E0216 16:22:02.421670 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:22:05 crc kubenswrapper[4705]: E0216 16:22:05.423738 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:22:08 crc kubenswrapper[4705]: I0216 16:22:08.420139 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:22:08 crc kubenswrapper[4705]: E0216 16:22:08.421361 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:15 crc kubenswrapper[4705]: E0216 16:22:15.422502 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:22:19 crc kubenswrapper[4705]: E0216 16:22:19.421552 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.013840 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8bb1d6b3-1208-4339-9d67-330c02618823/aodh-api/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.178336 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8bb1d6b3-1208-4339-9d67-330c02618823/aodh-evaluator/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.289306 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8bb1d6b3-1208-4339-9d67-330c02618823/aodh-listener/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.362707 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8bb1d6b3-1208-4339-9d67-330c02618823/aodh-notifier/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.400861 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-675dd58676-vnqw2_ab2c420d-8288-48f7-b53e-f480bf6d5a7f/barbican-api/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.420811 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:22:21 crc kubenswrapper[4705]: E0216 16:22:21.421120 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.493861 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-675dd58676-vnqw2_ab2c420d-8288-48f7-b53e-f480bf6d5a7f/barbican-api-log/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.632666 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bf77f7566-frgcc_edea8308-f2c7-4f10-993c-974327a36727/barbican-keystone-listener/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.708471 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bf77f7566-frgcc_edea8308-f2c7-4f10-993c-974327a36727/barbican-keystone-listener-log/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.910052 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68c59b585f-gvjjl_eff171da-ce4a-4c88-b7bd-b7b88e6ad322/barbican-worker/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.931057 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68c59b585f-gvjjl_eff171da-ce4a-4c88-b7bd-b7b88e6ad322/barbican-worker-log/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.097639 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t_ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.259262 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0eefb1ac-9933-45ff-a3de-de6a375bef45/ceilometer-notification-agent/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.364586 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0eefb1ac-9933-45ff-a3de-de6a375bef45/proxy-httpd/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.406800 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_0eefb1ac-9933-45ff-a3de-de6a375bef45/sg-core/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.571924 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d09b351a-8da4-4f00-8847-f3461478179f/cinder-api/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.633101 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d09b351a-8da4-4f00-8847-f3461478179f/cinder-api-log/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.865649 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c85708f6-f2cb-4248-94e9-7c7763e88275/probe/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.910128 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l9dk8_414f383c-09a6-4895-81cc-e12f73391831/init/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.971974 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c85708f6-f2cb-4248-94e9-7c7763e88275/cinder-scheduler/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.110010 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l9dk8_414f383c-09a6-4895-81cc-e12f73391831/dnsmasq-dns/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.123714 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l9dk8_414f383c-09a6-4895-81cc-e12f73391831/init/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.214602 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-49hkn_49d4643c-71ab-4c0f-b3cb-0f494971aa6e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.593737 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx_5c695fba-8bed-4549-98f9-b708893eab8e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.686021 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-drn5g_447b9ab7-d583-4e71-8eca-fb352e541b13/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.839209 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j_0b4f3354-7fb7-4031-9c17-270d82f9ece1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.024421 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mtzln_ca989d06-e6a2-47cc-abc9-17d4c2740830/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.091841 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6_896e8ac5-e84c-41d6-a6e5-638c9b5cae1c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.255695 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq_df22a5a3-55ac-4d51-99bb-c6624cd8ba8f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.378501 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2ef0b445-ec9e-4c58-a7d3-59068664d3ca/glance-httpd/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.537876 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_2ef0b445-ec9e-4c58-a7d3-59068664d3ca/glance-log/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.611780 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_28ba576c-ee01-48ea-b78b-a2bea81b90a2/glance-log/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.671714 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_28ba576c-ee01-48ea-b78b-a2bea81b90a2/glance-httpd/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.388162 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7b7bf99b56-hm6dc_ada71f46-f923-4974-9776-ed92f20c79b1/heat-engine/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.460854 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7986669c9b-q8ghv_08b1576e-92c8-407b-b821-e0cbfe1be11a/heat-api/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.532155 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-65b6d6849b-79456_94fb430a-807d-4e37-bc5a-9b4c75454427/heat-cfnapi/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.627524 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cd49d8b6b-6gdmx_57b8117e-e668-46a4-a652-8ac2b3e5d8ff/keystone-api/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.755307 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29520961-75mxg_98bca645-7f96-4667-adb9-cf4c5002ba78/keystone-cron/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.808147 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_db5e423c-e590-4e7b-913a-a0a10d55537d/kube-state-metrics/0.log" Feb 16 16:22:26 crc kubenswrapper[4705]: I0216 16:22:26.122787 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_mysqld-exporter-0_d40e4f3a-57bb-45e6-997b-39ffc0e497d9/mysqld-exporter/0.log" Feb 16 16:22:26 crc kubenswrapper[4705]: I0216 16:22:26.527505 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66f94f69bf-82g78_f7edca3b-82f6-4cfb-9781-664afa855ba8/neutron-api/0.log" Feb 16 16:22:26 crc kubenswrapper[4705]: I0216 16:22:26.653959 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66f94f69bf-82g78_f7edca3b-82f6-4cfb-9781-664afa855ba8/neutron-httpd/0.log" Feb 16 16:22:26 crc kubenswrapper[4705]: I0216 16:22:26.956317 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b3f98b0f-bb45-4942-81e0-68e6f2658df5/nova-api-log/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.136929 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_4d5bb097-aa56-4b02-942e-70b894afe84a/nova-cell0-conductor-conductor/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.314563 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b3f98b0f-bb45-4942-81e0-68e6f2658df5/nova-api-api/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.372620 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d/nova-cell1-conductor-conductor/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.538015 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b49f6329-2396-4d3e-9b28-2dd3586b1965/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.682626 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e121221e-aecf-4425-bb78-e384ce98e73b/nova-metadata-log/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.991831 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_e67e0dd7-af17-4240-ab5a-b6c149913841/nova-scheduler-scheduler/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.173339 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab/mysql-bootstrap/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: E0216 16:22:28.428154 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.479752 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab/galera/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.513190 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab/mysql-bootstrap/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.716765 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50502923-5ef9-46a9-a23d-abe8face6040/mysql-bootstrap/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.989769 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50502923-5ef9-46a9-a23d-abe8face6040/mysql-bootstrap/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.003569 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50502923-5ef9-46a9-a23d-abe8face6040/galera/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.208237 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_4881941b-eb71-45be-aa51-0e8431b29e89/openstackclient/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.306998 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-crbv8_4374b7db-8c42-42e1-b2bd-c633bdd8edfd/ovn-controller/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.538020 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jbdgd_17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772/openstack-network-exporter/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.717753 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e121221e-aecf-4425-bb78-e384ce98e73b/nova-metadata-metadata/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.739713 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pc9sf_be538ffa-cfea-445d-872f-1a0a68b77a50/ovsdb-server-init/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.074118 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pc9sf_be538ffa-cfea-445d-872f-1a0a68b77a50/ovsdb-server-init/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.149887 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pc9sf_be538ffa-cfea-445d-872f-1a0a68b77a50/ovs-vswitchd/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.203209 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pc9sf_be538ffa-cfea-445d-872f-1a0a68b77a50/ovsdb-server/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.337512 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1ca8a807-8e20-4d12-8355-09c1883163ca/openstack-network-exporter/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.386312 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_1ca8a807-8e20-4d12-8355-09c1883163ca/ovn-northd/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.579076 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e54f9b0-7b03-46de-8c76-2a37e44a02df/ovsdbserver-nb/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.589964 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e54f9b0-7b03-46de-8c76-2a37e44a02df/openstack-network-exporter/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.777641 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_54e71500-a592-4c97-86c1-4f3f6a4d1b41/openstack-network-exporter/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.888149 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_54e71500-a592-4c97-86c1-4f3f6a4d1b41/ovsdbserver-sb/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.047093 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6599894f76-dcwz8_4122899e-95db-413a-ac71-f0574969753a/placement-api/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.076070 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6599894f76-dcwz8_4122899e-95db-413a-ac71-f0574969753a/placement-log/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.171921 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/init-config-reloader/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.386860 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/prometheus/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.436651 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/config-reloader/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.507755 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/thanos-sidecar/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.539074 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/init-config-reloader/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.692884 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35504e73-1115-4e30-8ef7-95e85f31eaf6/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.061804 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_af0e4de4-5af4-4d5c-b2c4-963771612f94/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.128124 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35504e73-1115-4e30-8ef7-95e85f31eaf6/rabbitmq/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.138598 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35504e73-1115-4e30-8ef7-95e85f31eaf6/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.303678 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_af0e4de4-5af4-4d5c-b2c4-963771612f94/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.462376 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_af0e4de4-5af4-4d5c-b2c4-963771612f94/rabbitmq/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.517592 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-1_3e86fa10-e583-4f86-97f5-e95ec2c9e9e0/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.721845 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3e86fa10-e583-4f86-97f5-e95ec2c9e9e0/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.795884 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3e86fa10-e583-4f86-97f5-e95ec2c9e9e0/rabbitmq/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.853547 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f3671c78-83d9-45b6-a869-d08abfa12906/setup-container/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.064040 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f3671c78-83d9-45b6-a869-d08abfa12906/setup-container/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.076337 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f3671c78-83d9-45b6-a869-d08abfa12906/rabbitmq/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.136765 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7zg59_c73749fc-8501-405f-bd7e-de9fca2d968a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.380876 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7_9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: E0216 16:22:33.421240 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.587404 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b76884b7-g4c57_811fab8b-dbb5-4985-b67f-d3671ea6ff9b/proxy-server/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.631391 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b76884b7-g4c57_811fab8b-dbb5-4985-b67f-d3671ea6ff9b/proxy-httpd/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.707050 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-bkfjd_f5297b85-4dcb-4e4d-8b11-fbba54b2b31d/swift-ring-rebalance/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.701647 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/account-auditor/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.810451 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/account-server/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.839950 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/account-replicator/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.848187 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/account-reaper/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.980517 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/container-auditor/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.071538 4705 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/container-updater/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.087215 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/container-replicator/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.113287 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/container-server/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.287624 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-auditor/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.398162 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-expirer/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.415682 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-replicator/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.420248 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:22:35 crc kubenswrapper[4705]: E0216 16:22:35.420608 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.454491 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-server/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.557005 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-updater/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.668278 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/rsync/0.log" Feb 16 16:22:36 crc kubenswrapper[4705]: I0216 16:22:36.050949 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/swift-recon-cron/0.log" Feb 16 16:22:40 crc kubenswrapper[4705]: I0216 16:22:40.530226 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_db14762a-eebd-41a0-b107-e879fedc05f1/memcached/0.log" Feb 16 16:22:42 crc kubenswrapper[4705]: E0216 16:22:42.422363 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:22:46 crc kubenswrapper[4705]: I0216 16:22:46.431559 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:22:46 crc kubenswrapper[4705]: E0216 16:22:46.432271 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:47 crc kubenswrapper[4705]: E0216 16:22:47.421658 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.138169 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:22:52 crc kubenswrapper[4705]: E0216 16:22:52.139941 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e71011d-2714-45d9-883a-ca78a022c8f2" containerName="container-00" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.139959 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e71011d-2714-45d9-883a-ca78a022c8f2" containerName="container-00" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.140235 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e71011d-2714-45d9-883a-ca78a022c8f2" containerName="container-00" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.142195 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.170786 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.194036 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.194499 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.194815 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.297367 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.297568 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.297605 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.297978 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.298076 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.321877 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.469261 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:53 crc kubenswrapper[4705]: I0216 16:22:53.132921 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:22:54 crc kubenswrapper[4705]: I0216 16:22:54.049206 4705 generic.go:334] "Generic (PLEG): container finished" podID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerID="06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475" exitCode=0 Feb 16 16:22:54 crc kubenswrapper[4705]: I0216 16:22:54.049660 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerDied","Data":"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475"} Feb 16 16:22:54 crc kubenswrapper[4705]: I0216 16:22:54.049688 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerStarted","Data":"5230e4898fc0f99178a19a89acba9e11354fc6b0463fa93c560ea2c9d29a6bde"} Feb 16 16:22:55 crc kubenswrapper[4705]: I0216 16:22:55.066768 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerStarted","Data":"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f"} Feb 16 16:22:55 crc kubenswrapper[4705]: E0216 16:22:55.423471 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:23:00 crc kubenswrapper[4705]: I0216 16:23:00.127943 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerID="46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f" exitCode=0 Feb 16 16:23:00 crc kubenswrapper[4705]: I0216 16:23:00.128033 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerDied","Data":"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f"} Feb 16 16:23:00 crc kubenswrapper[4705]: I0216 16:23:00.419413 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:00 crc kubenswrapper[4705]: E0216 16:23:00.420008 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:01 crc kubenswrapper[4705]: I0216 16:23:01.140478 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerStarted","Data":"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81"} Feb 16 16:23:01 crc kubenswrapper[4705]: I0216 16:23:01.185576 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rsd44" podStartSLOduration=2.715055014 podStartE2EDuration="9.185550138s" podCreationTimestamp="2026-02-16 16:22:52 +0000 UTC" firstStartedPulling="2026-02-16 16:22:54.05189074 +0000 UTC m=+5368.236867816" lastFinishedPulling="2026-02-16 16:23:00.522385864 +0000 UTC m=+5374.707362940" observedRunningTime="2026-02-16 16:23:01.163391302 +0000 UTC m=+5375.348368398" 
watchObservedRunningTime="2026-02-16 16:23:01.185550138 +0000 UTC m=+5375.370527234" Feb 16 16:23:02 crc kubenswrapper[4705]: E0216 16:23:02.421977 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:02 crc kubenswrapper[4705]: I0216 16:23:02.469667 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:02 crc kubenswrapper[4705]: I0216 16:23:02.469751 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:03 crc kubenswrapper[4705]: I0216 16:23:03.979968 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rsd44" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" probeResult="failure" output=< Feb 16 16:23:03 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:23:03 crc kubenswrapper[4705]: > Feb 16 16:23:07 crc kubenswrapper[4705]: I0216 16:23:07.989965 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/util/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.283901 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/util/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.286091 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/pull/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.317983 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/pull/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.502377 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/extract/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.512980 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/util/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.521669 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/pull/0.log" Feb 16 16:23:09 crc kubenswrapper[4705]: I0216 16:23:09.031202 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-fsx2w_f0b4e27c-91ff-4540-bfff-e6c30849c75f/manager/0.log" Feb 16 16:23:09 crc kubenswrapper[4705]: I0216 16:23:09.419029 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-xdlbv_59e2a9a8-5a0d-4772-8d9c-b755fcd234be/manager/0.log" Feb 16 16:23:09 crc kubenswrapper[4705]: E0216 16:23:09.421479 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:23:09 crc kubenswrapper[4705]: I0216 16:23:09.778590 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-f4fgx_5ee1a78f-cea6-443b-9b43-9ed2334c5c9e/manager/0.log" Feb 16 16:23:09 crc kubenswrapper[4705]: I0216 16:23:09.884522 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-q5n45_f1a4206b-818d-49e7-9177-9dc7373ded1c/manager/0.log" Feb 16 16:23:10 crc kubenswrapper[4705]: I0216 16:23:10.459882 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-ftdcn_a6d65371-bf15-42b9-857d-c4c7350aa402/manager/0.log" Feb 16 16:23:10 crc kubenswrapper[4705]: I0216 16:23:10.782846 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-xg4dw_9bd1689a-ae93-4ac0-ab21-c899756ef07a/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.135255 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-8lztr_34eadd57-e91b-4324-93c0-ede339012ab3/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.318962 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-dnbpd_f06e9156-0c7b-41f6-a1cf-83820a7e7732/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.419458 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:11 crc kubenswrapper[4705]: E0216 16:23:11.419813 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.570361 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-kh759_e73efbc6-26db-4760-b745-3c93c9b2329e/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.883194 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-2vvm8_9f0ad3cb-ac80-4462-bd97-b09f9367dc54/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.909172 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-s9vdm_84edc365-fa2c-40bc-ae0e-b71ae094ab27/manager/0.log" Feb 16 16:23:12 crc kubenswrapper[4705]: I0216 16:23:12.240206 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-b6587_8279d837-6ad4-4e2b-a03a-eb0a24a30998/manager/0.log" Feb 16 16:23:12 crc kubenswrapper[4705]: I0216 16:23:12.384554 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq_1872b592-a1cc-445a-b75f-f658612dc160/manager/0.log" Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.037779 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-787c798d66-r8xk2_a8b2ba76-e9d9-404f-9859-22c40c63f1fb/operator/0.log" Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.115748 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-rtf6z_050e9b74-0e40-4a1a-8cb8-1ee038752bb6/registry-server/0.log" Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.461145 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-hw64s_d4a1c432-7691-472b-80af-caaa6afcacb2/manager/0.log" Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.532476 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rsd44" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" probeResult="failure" output=< Feb 16 16:23:13 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:23:13 crc kubenswrapper[4705]: > Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.755997 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-vkmgq_794d8603-8fa6-4068-8a38-e0825d42ae3f/manager/0.log" Feb 16 16:23:14 crc kubenswrapper[4705]: I0216 16:23:14.031576 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5s9ck_d67e5221-5cd4-4659-a41b-5d470f435c3e/operator/0.log" Feb 16 16:23:14 crc kubenswrapper[4705]: I0216 16:23:14.276546 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6c6fr_ca67e7ec-20a9-4768-ae37-3aa90f721201/manager/0.log" Feb 16 16:23:14 crc kubenswrapper[4705]: I0216 16:23:14.815891 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-bk9rm_c66cb2ee-a6d3-454b-a2ea-a160038b76f6/manager/0.log" Feb 16 16:23:15 crc kubenswrapper[4705]: I0216 16:23:15.221917 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6ccb9b958b-qbt7j_8d4c4ad7-542f-4d25-a444-7b4752e43f89/manager/0.log" Feb 16 16:23:15 crc kubenswrapper[4705]: I0216 16:23:15.311829 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b45b684f5-zrvmj_07891331-9fdb-4922-aea1-6a3acf7f656f/manager/0.log" Feb 16 16:23:15 crc kubenswrapper[4705]: E0216 16:23:15.421944 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:15 crc kubenswrapper[4705]: I0216 16:23:15.674187 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-77d2l_d583ac10-9ad2-4f95-9787-74f2cb28c943/manager/0.log" Feb 16 16:23:16 crc kubenswrapper[4705]: I0216 16:23:16.068543 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-zk57l_7373be90-eefb-4c2b-bdbd-a312daef2434/manager/0.log" Feb 16 16:23:23 crc kubenswrapper[4705]: I0216 16:23:23.420469 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-f52r7_1b9942d1-9e1e-436b-8a58-e37d6b55a00b/manager/0.log" Feb 16 16:23:23 crc kubenswrapper[4705]: I0216 16:23:23.524151 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rsd44" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" probeResult="failure" output=< Feb 16 16:23:23 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:23:23 crc kubenswrapper[4705]: > Feb 16 16:23:24 
crc kubenswrapper[4705]: E0216 16:23:24.422658 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:23:25 crc kubenswrapper[4705]: I0216 16:23:25.419898 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:25 crc kubenswrapper[4705]: E0216 16:23:25.420974 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:29 crc kubenswrapper[4705]: E0216 16:23:29.423230 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:32 crc kubenswrapper[4705]: I0216 16:23:32.521579 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:32 crc kubenswrapper[4705]: I0216 16:23:32.575138 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:32 crc kubenswrapper[4705]: I0216 16:23:32.769952 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:23:34 crc kubenswrapper[4705]: I0216 16:23:34.509638 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rsd44" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" containerID="cri-o://1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" gracePeriod=2 Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.159139 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.240295 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") pod \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.240755 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") pod \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.241040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") pod \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.251898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities" (OuterVolumeSpecName: "utilities") pod "fdc07fe9-1299-4e6c-8178-a7c42b022c7c" (UID: 
"fdc07fe9-1299-4e6c-8178-a7c42b022c7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.253850 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2" (OuterVolumeSpecName: "kube-api-access-4dfl2") pod "fdc07fe9-1299-4e6c-8178-a7c42b022c7c" (UID: "fdc07fe9-1299-4e6c-8178-a7c42b022c7c"). InnerVolumeSpecName "kube-api-access-4dfl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.344347 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.344410 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") on node \"crc\" DevicePath \"\"" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.361530 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdc07fe9-1299-4e6c-8178-a7c42b022c7c" (UID: "fdc07fe9-1299-4e6c-8178-a7c42b022c7c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.446361 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522287 4705 generic.go:334] "Generic (PLEG): container finished" podID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerID="1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" exitCode=0 Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522336 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerDied","Data":"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81"} Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522391 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerDied","Data":"5230e4898fc0f99178a19a89acba9e11354fc6b0463fa93c560ea2c9d29a6bde"} Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522397 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522411 4705 scope.go:117] "RemoveContainer" containerID="1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.557345 4705 scope.go:117] "RemoveContainer" containerID="46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.575106 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.588088 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.601400 4705 scope.go:117] "RemoveContainer" containerID="06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.641537 4705 scope.go:117] "RemoveContainer" containerID="1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" Feb 16 16:23:35 crc kubenswrapper[4705]: E0216 16:23:35.641994 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81\": container with ID starting with 1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81 not found: ID does not exist" containerID="1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.642025 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81"} err="failed to get container status \"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81\": rpc error: code = NotFound desc = could not find container 
\"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81\": container with ID starting with 1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81 not found: ID does not exist" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.642047 4705 scope.go:117] "RemoveContainer" containerID="46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f" Feb 16 16:23:35 crc kubenswrapper[4705]: E0216 16:23:35.642388 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f\": container with ID starting with 46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f not found: ID does not exist" containerID="46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.642473 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f"} err="failed to get container status \"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f\": rpc error: code = NotFound desc = could not find container \"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f\": container with ID starting with 46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f not found: ID does not exist" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.642549 4705 scope.go:117] "RemoveContainer" containerID="06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475" Feb 16 16:23:35 crc kubenswrapper[4705]: E0216 16:23:35.643174 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475\": container with ID starting with 06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475 not found: ID does not exist" 
containerID="06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.643196 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475"} err="failed to get container status \"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475\": rpc error: code = NotFound desc = could not find container \"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475\": container with ID starting with 06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475 not found: ID does not exist" Feb 16 16:23:36 crc kubenswrapper[4705]: I0216 16:23:36.433409 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" path="/var/lib/kubelet/pods/fdc07fe9-1299-4e6c-8178-a7c42b022c7c/volumes" Feb 16 16:23:37 crc kubenswrapper[4705]: I0216 16:23:37.419546 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:37 crc kubenswrapper[4705]: E0216 16:23:37.420211 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:37 crc kubenswrapper[4705]: E0216 16:23:37.421158 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 
16:23:39 crc kubenswrapper[4705]: I0216 16:23:39.717260 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kqpk2_0b436476-c64b-40ca-a644-1067ccefcecc/control-plane-machine-set-operator/0.log" Feb 16 16:23:39 crc kubenswrapper[4705]: I0216 16:23:39.872940 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tzm67_b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea/machine-api-operator/0.log" Feb 16 16:23:39 crc kubenswrapper[4705]: I0216 16:23:39.888663 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tzm67_b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea/kube-rbac-proxy/0.log" Feb 16 16:23:43 crc kubenswrapper[4705]: E0216 16:23:43.422186 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:49 crc kubenswrapper[4705]: I0216 16:23:49.419529 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:49 crc kubenswrapper[4705]: E0216 16:23:49.420317 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:51 crc kubenswrapper[4705]: E0216 16:23:51.423098 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:23:56 crc kubenswrapper[4705]: E0216 16:23:56.432191 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:56 crc kubenswrapper[4705]: I0216 16:23:56.959729 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-txcpz_ca614a32-6a4c-4802-8cb5-a927aac7a59a/cert-manager-cainjector/0.log" Feb 16 16:23:57 crc kubenswrapper[4705]: I0216 16:23:57.045101 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-46spv_b6695119-142b-40cb-bdd8-e0e1f55e0e61/cert-manager-controller/0.log" Feb 16 16:23:57 crc kubenswrapper[4705]: I0216 16:23:57.185862 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mdqgz_fc1f84cc-974e-42c8-8b49-120dfe74aa0f/cert-manager-webhook/0.log" Feb 16 16:24:01 crc kubenswrapper[4705]: I0216 16:24:01.429306 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:24:01 crc kubenswrapper[4705]: E0216 16:24:01.430312 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:24:06 crc kubenswrapper[4705]: E0216 16:24:06.429922 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:07 crc kubenswrapper[4705]: E0216 16:24:07.422312 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:12 crc kubenswrapper[4705]: I0216 16:24:12.564034 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-hl5c9_303c8298-3e10-49e8-96b1-ed1dafcd23e3/nmstate-console-plugin/0.log" Feb 16 16:24:12 crc kubenswrapper[4705]: I0216 16:24:12.839132 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wr89v_9ffb9d03-b8ea-44ff-9397-58b55c367d89/nmstate-handler/0.log" Feb 16 16:24:12 crc kubenswrapper[4705]: I0216 16:24:12.975021 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-tnbq4_ed67458f-1875-405e-85a5-2a4f7d54089b/kube-rbac-proxy/0.log" Feb 16 16:24:13 crc kubenswrapper[4705]: I0216 16:24:13.053626 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-tnbq4_ed67458f-1875-405e-85a5-2a4f7d54089b/nmstate-metrics/0.log" Feb 16 16:24:13 crc kubenswrapper[4705]: I0216 16:24:13.142735 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-h6nzt_b2d83f82-a3e4-4937-8484-5f8174b5d986/nmstate-operator/0.log" Feb 16 16:24:13 crc kubenswrapper[4705]: I0216 16:24:13.254412 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-9kf74_7a87077c-c5fa-4c92-9c08-44dcf11d38c7/nmstate-webhook/0.log" Feb 16 16:24:15 crc kubenswrapper[4705]: I0216 16:24:15.419903 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:24:15 crc kubenswrapper[4705]: E0216 16:24:15.420912 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:24:17 crc kubenswrapper[4705]: E0216 16:24:17.422765 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:22 crc kubenswrapper[4705]: E0216 16:24:22.423133 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:26 crc kubenswrapper[4705]: I0216 16:24:26.428361 4705 scope.go:117] "RemoveContainer" 
containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:24:26 crc kubenswrapper[4705]: E0216 16:24:26.429505 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:24:28 crc kubenswrapper[4705]: E0216 16:24:28.423965 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:28 crc kubenswrapper[4705]: I0216 16:24:28.818087 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6b7769c4bd-hnqwn_e0f8cfad-0639-40d4-8a2c-832935b8cddc/kube-rbac-proxy/0.log" Feb 16 16:24:28 crc kubenswrapper[4705]: I0216 16:24:28.863642 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6b7769c4bd-hnqwn_e0f8cfad-0639-40d4-8a2c-832935b8cddc/manager/0.log" Feb 16 16:24:33 crc kubenswrapper[4705]: E0216 16:24:33.424632 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:40 crc kubenswrapper[4705]: I0216 16:24:40.419811 4705 scope.go:117] "RemoveContainer" 
containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:24:41 crc kubenswrapper[4705]: I0216 16:24:41.271282 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140"} Feb 16 16:24:42 crc kubenswrapper[4705]: E0216 16:24:42.422984 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.404725 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-f8kwg_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb/prometheus-operator/0.log" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.594773 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_81328a1c-32d6-4ce6-9139-8418d2e8fa52/prometheus-operator-admission-webhook/0.log" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.622008 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_b90dedac-68bb-409d-9860-af59c6c7d172/prometheus-operator-admission-webhook/0.log" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.813039 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l2rxp_5510c272-cd32-4850-a9fa-daff2e045b92/operator/0.log" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.885417 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-9hcns_72697fcc-cd94-4ba9-9479-cb5bd82d83ab/observability-ui-dashboards/0.log" Feb 16 16:24:44 crc kubenswrapper[4705]: I0216 16:24:44.029326 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tqj56_8acc36de-d26d-44cd-bad6-d31f0a4a4520/perses-operator/0.log" Feb 16 16:24:44 crc kubenswrapper[4705]: E0216 16:24:44.423029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:54 crc kubenswrapper[4705]: E0216 16:24:54.425757 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:59 crc kubenswrapper[4705]: E0216 16:24:59.422599 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:59 crc kubenswrapper[4705]: I0216 16:24:59.942500 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-9x6cn_0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9/cluster-logging-operator/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.112437 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_collector-rv6rf_48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9/collector/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.189069 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_cd14a989-22ac-46cb-9295-a99e2043542b/loki-compactor/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.530615 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-s8kg2_feb0e04c-e741-4dbe-8c09-94379b736809/loki-distributor/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.587092 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-84f4bcb569-mzgch_a85ad7e0-59d0-412d-96e1-298020ef9927/opa/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.606206 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-84f4bcb569-mzgch_a85ad7e0-59d0-412d-96e1-298020ef9927/gateway/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.800503 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-84f4bcb569-zxt7t_d1223933-4ce9-41dd-9c8a-14a59b540e20/gateway/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.848141 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-84f4bcb569-zxt7t_d1223933-4ce9-41dd-9c8a-14a59b540e20/opa/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.988928 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_4cde3c29-9511-489b-9849-468cae07d312/loki-index-gateway/0.log" Feb 16 16:25:01 crc kubenswrapper[4705]: I0216 16:25:01.091153 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf/loki-ingester/0.log" Feb 16 16:25:01 
crc kubenswrapper[4705]: I0216 16:25:01.265476 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-rbcrd_dd10ec10-e122-430f-afaf-b0b8222a6b15/loki-querier/0.log" Feb 16 16:25:01 crc kubenswrapper[4705]: I0216 16:25:01.342457 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-mbwk8_8e2f02fa-7b78-49ef-8c1a-f9cf7387e063/loki-query-frontend/0.log" Feb 16 16:25:08 crc kubenswrapper[4705]: E0216 16:25:08.430611 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:25:14 crc kubenswrapper[4705]: E0216 16:25:14.422088 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.529809 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:15 crc kubenswrapper[4705]: E0216 16:25:15.530706 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="extract-utilities" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.530723 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="extract-utilities" Feb 16 16:25:15 crc kubenswrapper[4705]: E0216 16:25:15.530757 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.530766 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" Feb 16 16:25:15 crc kubenswrapper[4705]: E0216 16:25:15.530792 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="extract-content" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.530802 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="extract-content" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.531081 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.535031 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.553788 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.564839 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.565312 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") 
" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.565540 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.670709 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.670933 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.671124 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.671200 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " 
pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.671525 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.701877 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.892914 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:16 crc kubenswrapper[4705]: I0216 16:25:16.532068 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:17 crc kubenswrapper[4705]: I0216 16:25:17.713853 4705 generic.go:334] "Generic (PLEG): container finished" podID="52d06b15-705b-47a8-8a15-7f41452d5007" containerID="993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408" exitCode=0 Feb 16 16:25:17 crc kubenswrapper[4705]: I0216 16:25:17.714088 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerDied","Data":"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408"} Feb 16 16:25:17 crc kubenswrapper[4705]: I0216 16:25:17.714116 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" 
event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerStarted","Data":"217b096c44622a46ad4ed6734a3e3730e80590af979a4af721540c8228924fb7"} Feb 16 16:25:19 crc kubenswrapper[4705]: E0216 16:25:19.422079 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.475729 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-5p2db_493ad03c-5e3e-4726-9764-272f39f5aa37/kube-rbac-proxy/0.log" Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.714469 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-5p2db_493ad03c-5e3e-4726-9764-272f39f5aa37/controller/0.log" Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.737896 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerStarted","Data":"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864"} Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.741230 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-frr-files/0.log" Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.974075 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-frr-files/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.003247 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-reloader/0.log" Feb 16 16:25:20 crc 
kubenswrapper[4705]: I0216 16:25:20.023667 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-reloader/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.046421 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.202902 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-frr-files/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.246837 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-reloader/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.292646 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.300931 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.520211 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-frr-files/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.525711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-reloader/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.560490 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.595989 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/controller/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.713059 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/frr-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.789508 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/kube-rbac-proxy/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.910990 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/kube-rbac-proxy-frr/0.log" Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.330860 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/reloader/0.log" Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.454315 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-x4255_751baaae-9090-48b1-9bae-79b7527d6c02/frr-k8s-webhook-server/0.log" Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.759442 4705 generic.go:334] "Generic (PLEG): container finished" podID="52d06b15-705b-47a8-8a15-7f41452d5007" containerID="cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864" exitCode=0 Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.759505 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerDied","Data":"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864"} Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.904148 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76745d596b-4dznb_55ce7b61-e1e6-483d-a84f-7ea168ef9672/manager/0.log" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.120940 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75967976b4-q84hp_624f7ca8-2011-4ed6-9ee2-24acddf29390/webhook-server/0.log" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.195202 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nbgmf_2536f291-dea1-4673-acf7-9beaffa87817/kube-rbac-proxy/0.log" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.304605 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/frr/0.log" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.778101 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerStarted","Data":"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f"} Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.831068 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4fk2w" podStartSLOduration=3.081790078 podStartE2EDuration="7.831039705s" podCreationTimestamp="2026-02-16 16:25:15 +0000 UTC" firstStartedPulling="2026-02-16 16:25:17.716015825 +0000 UTC m=+5511.900992901" lastFinishedPulling="2026-02-16 16:25:22.465265452 +0000 UTC m=+5516.650242528" observedRunningTime="2026-02-16 16:25:22.815309271 +0000 UTC m=+5517.000286357" watchObservedRunningTime="2026-02-16 16:25:22.831039705 +0000 UTC m=+5517.016016781" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.972520 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nbgmf_2536f291-dea1-4673-acf7-9beaffa87817/speaker/0.log" Feb 16 16:25:25 crc 
kubenswrapper[4705]: I0216 16:25:25.893570 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:25 crc kubenswrapper[4705]: I0216 16:25:25.894167 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:25 crc kubenswrapper[4705]: I0216 16:25:25.949104 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:29 crc kubenswrapper[4705]: E0216 16:25:29.422970 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:25:33 crc kubenswrapper[4705]: E0216 16:25:33.424328 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:25:35 crc kubenswrapper[4705]: I0216 16:25:35.972351 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:36 crc kubenswrapper[4705]: I0216 16:25:36.033086 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:36 crc kubenswrapper[4705]: I0216 16:25:36.900565 4705 scope.go:117] "RemoveContainer" containerID="f76a2880637ec8e061f810a39410c0ce57f54c2c68714b7a697e5bece42d51ef" Feb 16 16:25:36 crc kubenswrapper[4705]: I0216 16:25:36.961252 4705 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-4fk2w" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="registry-server" containerID="cri-o://afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" gracePeriod=2 Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.346331 4705 scope.go:117] "RemoveContainer" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.853932 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.856940 4705 scope.go:117] "RemoveContainer" containerID="21669da6af69e10615ec9d9bfd683312766c7eb62e5afb7d2c4d0c330e7be906" Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.989570 4705 generic.go:334] "Generic (PLEG): container finished" podID="52d06b15-705b-47a8-8a15-7f41452d5007" containerID="afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" exitCode=0 Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.989729 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.991031 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerDied","Data":"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f"} Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.991139 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerDied","Data":"217b096c44622a46ad4ed6734a3e3730e80590af979a4af721540c8228924fb7"} Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.991172 4705 scope.go:117] "RemoveContainer" containerID="afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.001504 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") pod \"52d06b15-705b-47a8-8a15-7f41452d5007\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.001566 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") pod \"52d06b15-705b-47a8-8a15-7f41452d5007\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.001724 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") pod \"52d06b15-705b-47a8-8a15-7f41452d5007\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " Feb 16 16:25:38 crc 
kubenswrapper[4705]: I0216 16:25:38.003269 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities" (OuterVolumeSpecName: "utilities") pod "52d06b15-705b-47a8-8a15-7f41452d5007" (UID: "52d06b15-705b-47a8-8a15-7f41452d5007"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.013325 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5" (OuterVolumeSpecName: "kube-api-access-xd4c5") pod "52d06b15-705b-47a8-8a15-7f41452d5007" (UID: "52d06b15-705b-47a8-8a15-7f41452d5007"). InnerVolumeSpecName "kube-api-access-xd4c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.022706 4705 scope.go:117] "RemoveContainer" containerID="cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.061127 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52d06b15-705b-47a8-8a15-7f41452d5007" (UID: "52d06b15-705b-47a8-8a15-7f41452d5007"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.079542 4705 scope.go:117] "RemoveContainer" containerID="993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.105601 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") on node \"crc\" DevicePath \"\"" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.105649 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.105665 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.110279 4705 scope.go:117] "RemoveContainer" containerID="afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" Feb 16 16:25:38 crc kubenswrapper[4705]: E0216 16:25:38.111015 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f\": container with ID starting with afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f not found: ID does not exist" containerID="afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.111063 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f"} err="failed to get container status 
\"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f\": rpc error: code = NotFound desc = could not find container \"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f\": container with ID starting with afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f not found: ID does not exist" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.111093 4705 scope.go:117] "RemoveContainer" containerID="cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864" Feb 16 16:25:38 crc kubenswrapper[4705]: E0216 16:25:38.111607 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864\": container with ID starting with cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864 not found: ID does not exist" containerID="cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.111650 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864"} err="failed to get container status \"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864\": rpc error: code = NotFound desc = could not find container \"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864\": container with ID starting with cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864 not found: ID does not exist" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.111685 4705 scope.go:117] "RemoveContainer" containerID="993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408" Feb 16 16:25:38 crc kubenswrapper[4705]: E0216 16:25:38.112141 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408\": container with ID starting with 993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408 not found: ID does not exist" containerID="993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.112172 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408"} err="failed to get container status \"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408\": rpc error: code = NotFound desc = could not find container \"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408\": container with ID starting with 993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408 not found: ID does not exist" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.322934 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.337958 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.436077 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" path="/var/lib/kubelet/pods/52d06b15-705b-47a8-8a15-7f41452d5007/volumes" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.549050 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/util/0.log" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.730776 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/util/0.log" Feb 16 
16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.780096 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/pull/0.log" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.784821 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/pull/0.log" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.908711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/util/0.log" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.932197 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/pull/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.025439 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/extract/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.197983 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/util/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.408974 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/util/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: E0216 16:25:40.422367 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.459152 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/pull/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.462623 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/pull/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.640212 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/extract/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.654292 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/pull/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.686284 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/util/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.870084 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/util/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.008631 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/pull/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.033481 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/util/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.050768 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/pull/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.252903 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/pull/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.299345 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/util/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.320864 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/extract/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.517566 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-utilities/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.676606 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-content/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 
16:25:41.697484 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-content/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.697621 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-utilities/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.915964 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-utilities/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.935619 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-content/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.179862 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-utilities/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.439664 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-content/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.452903 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-utilities/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.462815 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-content/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.716319 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/registry-server/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.786731 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-utilities/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.843718 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-content/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.138449 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/registry-server/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.376176 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.533418 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/pull/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.533525 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/pull/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.548632 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.746558 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.753714 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/pull/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.782032 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/extract/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.788049 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.995752 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.999291 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/pull/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.003831 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/pull/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.261812 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/util/0.log" Feb 16 
16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.280200 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/extract/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.337257 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ghmpd_88197577-5157-4d99-9813-eb3173530b4f/marketplace-operator/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.337925 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/pull/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.513945 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-utilities/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.752729 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-content/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.765343 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-content/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.787999 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-utilities/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.928256 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-utilities/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 
16:25:44.946780 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-content/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.037054 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-utilities/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.239969 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/registry-server/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.250831 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-content/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.270764 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-utilities/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.313058 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-content/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.494362 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-utilities/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.519529 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-content/0.log" Feb 16 16:25:46 crc kubenswrapper[4705]: I0216 16:25:46.274094 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/registry-server/0.log" Feb 16 16:25:48 crc kubenswrapper[4705]: E0216 16:25:48.427727 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:25:54 crc kubenswrapper[4705]: E0216 16:25:54.422016 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.464728 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-f8kwg_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb/prometheus-operator/0.log" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.534480 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_b90dedac-68bb-409d-9860-af59c6c7d172/prometheus-operator-admission-webhook/0.log" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.539952 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_81328a1c-32d6-4ce6-9139-8418d2e8fa52/prometheus-operator-admission-webhook/0.log" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.765886 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l2rxp_5510c272-cd32-4850-a9fa-daff2e045b92/operator/0.log" 
Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.836194 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tqj56_8acc36de-d26d-44cd-bad6-d31f0a4a4520/perses-operator/0.log" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.846381 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-9hcns_72697fcc-cd94-4ba9-9479-cb5bd82d83ab/observability-ui-dashboards/0.log" Feb 16 16:26:03 crc kubenswrapper[4705]: E0216 16:26:03.443583 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:26:07 crc kubenswrapper[4705]: E0216 16:26:07.421828 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:17 crc kubenswrapper[4705]: E0216 16:26:17.422618 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:26:18 crc kubenswrapper[4705]: I0216 16:26:18.096305 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6b7769c4bd-hnqwn_e0f8cfad-0639-40d4-8a2c-832935b8cddc/kube-rbac-proxy/0.log" Feb 16 16:26:18 crc 
kubenswrapper[4705]: I0216 16:26:18.145807 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6b7769c4bd-hnqwn_e0f8cfad-0639-40d4-8a2c-832935b8cddc/manager/0.log" Feb 16 16:26:20 crc kubenswrapper[4705]: E0216 16:26:20.423703 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:29 crc kubenswrapper[4705]: I0216 16:26:29.421190 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:26:29 crc kubenswrapper[4705]: E0216 16:26:29.552684 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:26:29 crc kubenswrapper[4705]: E0216 16:26:29.552767 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:26:29 crc kubenswrapper[4705]: E0216 16:26:29.552917 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:26:29 crc kubenswrapper[4705]: E0216 16:26:29.554083 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:26:31 crc kubenswrapper[4705]: E0216 16:26:31.543723 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:26:31 crc kubenswrapper[4705]: E0216 16:26:31.544275 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:26:31 crc kubenswrapper[4705]: E0216 16:26:31.544460 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 16:26:31 crc kubenswrapper[4705]: E0216 16:26:31.545733 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:36 crc kubenswrapper[4705]: I0216 16:26:36.796351 4705 trace.go:236] Trace[1045464604]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (16-Feb-2026 16:26:35.772) (total time: 1024ms): Feb 16 16:26:36 crc kubenswrapper[4705]: Trace[1045464604]: [1.024282416s] [1.024282416s] END Feb 16 16:26:40 crc kubenswrapper[4705]: E0216 16:26:40.421334 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:26:45 crc kubenswrapper[4705]: E0216 16:26:45.421915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:55 crc kubenswrapper[4705]: E0216 16:26:55.422435 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:00 crc kubenswrapper[4705]: E0216 16:27:00.424420 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:27:01 crc kubenswrapper[4705]: I0216 16:27:01.684088 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:27:01 crc kubenswrapper[4705]: I0216 16:27:01.684542 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:27:10 crc kubenswrapper[4705]: E0216 16:27:10.421071 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:11 crc kubenswrapper[4705]: E0216 16:27:11.433541 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:27:22 crc kubenswrapper[4705]: E0216 16:27:22.421915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:27:24 crc kubenswrapper[4705]: E0216 16:27:24.421360 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:31 crc kubenswrapper[4705]: I0216 16:27:31.684018 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:27:31 crc kubenswrapper[4705]: I0216 16:27:31.684464 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:27:35 crc kubenswrapper[4705]: E0216 16:27:35.421739 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:27:39 crc kubenswrapper[4705]: E0216 16:27:39.422093 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:50 crc kubenswrapper[4705]: E0216 16:27:50.423093 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:50 crc kubenswrapper[4705]: E0216 16:27:50.423222 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.684002 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.684961 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.685282 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.685826 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.685889 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140" gracePeriod=600 Feb 16 16:28:02 crc kubenswrapper[4705]: I0216 16:28:02.683055 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140" exitCode=0 Feb 16 16:28:02 crc kubenswrapper[4705]: I0216 16:28:02.683147 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140"} Feb 16 16:28:02 crc kubenswrapper[4705]: I0216 16:28:02.683514 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"} Feb 16 16:28:02 crc kubenswrapper[4705]: I0216 16:28:02.683535 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:28:04 crc kubenswrapper[4705]: E0216 16:28:04.422695 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:05 crc kubenswrapper[4705]: E0216 16:28:05.423029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:16 crc kubenswrapper[4705]: E0216 16:28:16.429724 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:17 crc kubenswrapper[4705]: I0216 16:28:17.869579 4705 generic.go:334] "Generic (PLEG): container finished" podID="b3941987-2937-407a-a067-3f3af600f1f0" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" exitCode=0 Feb 16 16:28:17 crc kubenswrapper[4705]: I0216 16:28:17.869666 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" 
event={"ID":"b3941987-2937-407a-a067-3f3af600f1f0","Type":"ContainerDied","Data":"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97"} Feb 16 16:28:17 crc kubenswrapper[4705]: I0216 16:28:17.870883 4705 scope.go:117] "RemoveContainer" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" Feb 16 16:28:17 crc kubenswrapper[4705]: I0216 16:28:17.987405 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bbqmf_must-gather-tx2kt_b3941987-2937-407a-a067-3f3af600f1f0/gather/0.log" Feb 16 16:28:18 crc kubenswrapper[4705]: E0216 16:28:18.422056 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.167966 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"] Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.168877 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="copy" containerID="cri-o://8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" gracePeriod=2 Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.182728 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"] Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.791872 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bbqmf_must-gather-tx2kt_b3941987-2937-407a-a067-3f3af600f1f0/copy/0.log" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.792910 4705 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.845591 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") pod \"b3941987-2937-407a-a067-3f3af600f1f0\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.845983 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") pod \"b3941987-2937-407a-a067-3f3af600f1f0\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.853033 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn" (OuterVolumeSpecName: "kube-api-access-9mmdn") pod "b3941987-2937-407a-a067-3f3af600f1f0" (UID: "b3941987-2937-407a-a067-3f3af600f1f0"). InnerVolumeSpecName "kube-api-access-9mmdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.948793 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") on node \"crc\" DevicePath \"\"" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.967914 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bbqmf_must-gather-tx2kt_b3941987-2937-407a-a067-3f3af600f1f0/copy/0.log" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.968300 4705 generic.go:334] "Generic (PLEG): container finished" podID="b3941987-2937-407a-a067-3f3af600f1f0" containerID="8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" exitCode=143 Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.968342 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.968354 4705 scope.go:117] "RemoveContainer" containerID="8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.990887 4705 scope.go:117] "RemoveContainer" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.025407 4705 scope.go:117] "RemoveContainer" containerID="8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" Feb 16 16:28:28 crc kubenswrapper[4705]: E0216 16:28:28.026067 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd\": container with ID starting with 8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd not found: ID does not exist" 
containerID="8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.026129 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd"} err="failed to get container status \"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd\": rpc error: code = NotFound desc = could not find container \"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd\": container with ID starting with 8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd not found: ID does not exist" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.026170 4705 scope.go:117] "RemoveContainer" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" Feb 16 16:28:28 crc kubenswrapper[4705]: E0216 16:28:28.026712 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97\": container with ID starting with f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97 not found: ID does not exist" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.026737 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97"} err="failed to get container status \"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97\": rpc error: code = NotFound desc = could not find container \"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97\": container with ID starting with f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97 not found: ID does not exist" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.035555 4705 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b3941987-2937-407a-a067-3f3af600f1f0" (UID: "b3941987-2937-407a-a067-3f3af600f1f0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.052267 4705 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.435196 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3941987-2937-407a-a067-3f3af600f1f0" path="/var/lib/kubelet/pods/b3941987-2937-407a-a067-3f3af600f1f0/volumes" Feb 16 16:28:29 crc kubenswrapper[4705]: E0216 16:28:29.421737 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:30 crc kubenswrapper[4705]: E0216 16:28:30.429775 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:40 crc kubenswrapper[4705]: E0216 16:28:40.425999 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:43 crc kubenswrapper[4705]: E0216 16:28:43.422199 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:53 crc kubenswrapper[4705]: E0216 16:28:53.422442 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.084896 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 16:28:54.085695 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="registry-server" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085710 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="registry-server" Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 16:28:54.085742 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="extract-utilities" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085748 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="extract-utilities" Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 
16:28:54.085770 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="gather" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085776 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="gather" Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 16:28:54.085795 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="copy" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085801 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="copy" Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 16:28:54.085814 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="extract-content" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085820 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="extract-content" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.086026 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="copy" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.086042 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="registry-server" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.086061 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="gather" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.088819 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.112145 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.235426 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.235605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.235666 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.338641 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.338768 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.338817 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.339330 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.339517 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.361406 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.417549 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:55 crc kubenswrapper[4705]: I0216 16:28:55.006639 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:28:55 crc kubenswrapper[4705]: I0216 16:28:55.260080 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerStarted","Data":"c89491e849a1c78fb88eb2dec7be0b61f81986b7258b31d04673e79b9f08e9c4"} Feb 16 16:28:55 crc kubenswrapper[4705]: I0216 16:28:55.260132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerStarted","Data":"3b83c9864ca9453bf150f110a6c0809e70e7b4baa3dc50e9a2db811f18961e1b"} Feb 16 16:28:56 crc kubenswrapper[4705]: I0216 16:28:56.273963 4705 generic.go:334] "Generic (PLEG): container finished" podID="b529f129-e471-43ba-a45a-abad696e8aef" containerID="c89491e849a1c78fb88eb2dec7be0b61f81986b7258b31d04673e79b9f08e9c4" exitCode=0 Feb 16 16:28:56 crc kubenswrapper[4705]: I0216 16:28:56.274037 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerDied","Data":"c89491e849a1c78fb88eb2dec7be0b61f81986b7258b31d04673e79b9f08e9c4"} Feb 16 16:28:56 crc kubenswrapper[4705]: E0216 16:28:56.427986 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:57 crc kubenswrapper[4705]: I0216 16:28:57.288470 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerStarted","Data":"c2af688e2efad623f4e1151e94ea0980442c407ef2fd2854e62922882461f2e4"} Feb 16 16:28:58 crc kubenswrapper[4705]: I0216 16:28:58.300806 4705 generic.go:334] "Generic (PLEG): container finished" podID="b529f129-e471-43ba-a45a-abad696e8aef" containerID="c2af688e2efad623f4e1151e94ea0980442c407ef2fd2854e62922882461f2e4" exitCode=0 Feb 16 16:28:58 crc kubenswrapper[4705]: I0216 16:28:58.300911 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerDied","Data":"c2af688e2efad623f4e1151e94ea0980442c407ef2fd2854e62922882461f2e4"} Feb 16 16:29:00 crc kubenswrapper[4705]: I0216 16:29:00.348360 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerStarted","Data":"5432b9a8411669dc9857374edad2f13a6e209c3c68988d0dee8cdb6979b1148c"} Feb 16 16:29:00 crc kubenswrapper[4705]: I0216 16:29:00.386396 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kkscm" podStartSLOduration=3.936724065 podStartE2EDuration="6.386356797s" podCreationTimestamp="2026-02-16 16:28:54 +0000 UTC" firstStartedPulling="2026-02-16 16:28:56.27710176 +0000 UTC m=+5730.462078836" lastFinishedPulling="2026-02-16 16:28:58.726734492 +0000 UTC m=+5732.911711568" observedRunningTime="2026-02-16 16:29:00.369938383 +0000 UTC m=+5734.554915459" watchObservedRunningTime="2026-02-16 16:29:00.386356797 +0000 UTC m=+5734.571333873" Feb 16 16:29:04 crc kubenswrapper[4705]: I0216 16:29:04.418083 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:04 crc kubenswrapper[4705]: I0216 16:29:04.418820 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:04 crc kubenswrapper[4705]: I0216 16:29:04.479618 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:05 crc kubenswrapper[4705]: I0216 16:29:05.481327 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:05 crc kubenswrapper[4705]: I0216 16:29:05.547029 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:29:06 crc kubenswrapper[4705]: E0216 16:29:06.435464 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:29:07 crc kubenswrapper[4705]: I0216 16:29:07.429287 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kkscm" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="registry-server" containerID="cri-o://5432b9a8411669dc9857374edad2f13a6e209c3c68988d0dee8cdb6979b1148c" gracePeriod=2 Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.450511 4705 generic.go:334] "Generic (PLEG): container finished" podID="b529f129-e471-43ba-a45a-abad696e8aef" containerID="5432b9a8411669dc9857374edad2f13a6e209c3c68988d0dee8cdb6979b1148c" exitCode=0 Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.450568 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" 
event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerDied","Data":"5432b9a8411669dc9857374edad2f13a6e209c3c68988d0dee8cdb6979b1148c"} Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.451084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerDied","Data":"3b83c9864ca9453bf150f110a6c0809e70e7b4baa3dc50e9a2db811f18961e1b"} Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.451101 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b83c9864ca9453bf150f110a6c0809e70e7b4baa3dc50e9a2db811f18961e1b" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.496050 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.648997 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") pod \"b529f129-e471-43ba-a45a-abad696e8aef\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.649144 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") pod \"b529f129-e471-43ba-a45a-abad696e8aef\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.649186 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") pod \"b529f129-e471-43ba-a45a-abad696e8aef\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " Feb 16 16:29:08 crc kubenswrapper[4705]: 
I0216 16:29:08.651112 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities" (OuterVolumeSpecName: "utilities") pod "b529f129-e471-43ba-a45a-abad696e8aef" (UID: "b529f129-e471-43ba-a45a-abad696e8aef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.657307 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx" (OuterVolumeSpecName: "kube-api-access-9cddx") pod "b529f129-e471-43ba-a45a-abad696e8aef" (UID: "b529f129-e471-43ba-a45a-abad696e8aef"). InnerVolumeSpecName "kube-api-access-9cddx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.710278 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b529f129-e471-43ba-a45a-abad696e8aef" (UID: "b529f129-e471-43ba-a45a-abad696e8aef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.753069 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.753114 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.753126 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:09 crc kubenswrapper[4705]: I0216 16:29:09.464010 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:09 crc kubenswrapper[4705]: I0216 16:29:09.538202 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:29:09 crc kubenswrapper[4705]: I0216 16:29:09.550787 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.435041 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b529f129-e471-43ba-a45a-abad696e8aef" path="/var/lib/kubelet/pods/b529f129-e471-43ba-a45a-abad696e8aef/volumes" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.755628 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:10 crc kubenswrapper[4705]: E0216 16:29:10.756438 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="extract-utilities" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.756458 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="extract-utilities" Feb 16 16:29:10 crc kubenswrapper[4705]: E0216 16:29:10.756482 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="registry-server" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.756491 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="registry-server" Feb 16 16:29:10 crc kubenswrapper[4705]: E0216 16:29:10.756515 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="extract-content" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.756524 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="extract-content" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.756852 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="registry-server" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.759603 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.780296 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.819828 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.819912 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.820094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.923541 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.923705 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.923765 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.924417 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.924518 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.948979 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:11 crc kubenswrapper[4705]: I0216 16:29:11.096027 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:11 crc kubenswrapper[4705]: E0216 16:29:11.424719 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:29:11 crc kubenswrapper[4705]: W0216 16:29:11.667440 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31f0330f_6e72_46a8_a663_593543de6aee.slice/crio-9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f WatchSource:0}: Error finding container 9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f: Status 404 returned error can't find the container with id 9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f Feb 16 16:29:11 crc kubenswrapper[4705]: I0216 16:29:11.669726 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:12 crc kubenswrapper[4705]: I0216 16:29:12.501964 4705 generic.go:334] "Generic (PLEG): container finished" podID="31f0330f-6e72-46a8-a663-593543de6aee" containerID="7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788" exitCode=0 Feb 16 16:29:12 crc kubenswrapper[4705]: I0216 16:29:12.502657 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerDied","Data":"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788"} Feb 16 16:29:12 crc kubenswrapper[4705]: I0216 16:29:12.502900 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" 
event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerStarted","Data":"9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f"} Feb 16 16:29:13 crc kubenswrapper[4705]: I0216 16:29:13.516410 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerStarted","Data":"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6"} Feb 16 16:29:14 crc kubenswrapper[4705]: I0216 16:29:14.547251 4705 generic.go:334] "Generic (PLEG): container finished" podID="31f0330f-6e72-46a8-a663-593543de6aee" containerID="935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6" exitCode=0 Feb 16 16:29:14 crc kubenswrapper[4705]: I0216 16:29:14.547678 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerDied","Data":"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6"} Feb 16 16:29:16 crc kubenswrapper[4705]: I0216 16:29:16.580562 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerStarted","Data":"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246"} Feb 16 16:29:16 crc kubenswrapper[4705]: I0216 16:29:16.612689 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lw65w" podStartSLOduration=4.123868796 podStartE2EDuration="6.612633264s" podCreationTimestamp="2026-02-16 16:29:10 +0000 UTC" firstStartedPulling="2026-02-16 16:29:12.504386275 +0000 UTC m=+5746.689363351" lastFinishedPulling="2026-02-16 16:29:14.993150743 +0000 UTC m=+5749.178127819" observedRunningTime="2026-02-16 16:29:16.599501933 +0000 UTC m=+5750.784479009" watchObservedRunningTime="2026-02-16 16:29:16.612633264 +0000 UTC 
m=+5750.797610380" Feb 16 16:29:20 crc kubenswrapper[4705]: E0216 16:29:20.427527 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.096761 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.096809 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.177967 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.689576 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.747997 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:23 crc kubenswrapper[4705]: I0216 16:29:23.656034 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lw65w" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="registry-server" containerID="cri-o://8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" gracePeriod=2 Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.366032 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.485614 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") pod \"31f0330f-6e72-46a8-a663-593543de6aee\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.485937 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") pod \"31f0330f-6e72-46a8-a663-593543de6aee\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.486050 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") pod \"31f0330f-6e72-46a8-a663-593543de6aee\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.487254 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities" (OuterVolumeSpecName: "utilities") pod "31f0330f-6e72-46a8-a663-593543de6aee" (UID: "31f0330f-6e72-46a8-a663-593543de6aee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.499724 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth" (OuterVolumeSpecName: "kube-api-access-jdqth") pod "31f0330f-6e72-46a8-a663-593543de6aee" (UID: "31f0330f-6e72-46a8-a663-593543de6aee"). InnerVolumeSpecName "kube-api-access-jdqth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.523685 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31f0330f-6e72-46a8-a663-593543de6aee" (UID: "31f0330f-6e72-46a8-a663-593543de6aee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.589211 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.589579 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.589685 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678256 4705 generic.go:334] "Generic (PLEG): container finished" podID="31f0330f-6e72-46a8-a663-593543de6aee" containerID="8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" exitCode=0 Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerDied","Data":"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246"} Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678348 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerDied","Data":"9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f"} Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678382 4705 scope.go:117] "RemoveContainer" containerID="8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678418 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.724482 4705 scope.go:117] "RemoveContainer" containerID="935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.736477 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.747512 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.751446 4705 scope.go:117] "RemoveContainer" containerID="7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.799426 4705 scope.go:117] "RemoveContainer" containerID="8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" Feb 16 16:29:24 crc kubenswrapper[4705]: E0216 16:29:24.800388 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246\": container with ID starting with 8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246 not found: ID does not exist" containerID="8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.800443 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246"} err="failed to get container status \"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246\": rpc error: code = NotFound desc = could not find container \"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246\": container with ID starting with 8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246 not found: ID does not exist" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.800472 4705 scope.go:117] "RemoveContainer" containerID="935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6" Feb 16 16:29:24 crc kubenswrapper[4705]: E0216 16:29:24.801522 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6\": container with ID starting with 935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6 not found: ID does not exist" containerID="935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.801569 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6"} err="failed to get container status \"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6\": rpc error: code = NotFound desc = could not find container \"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6\": container with ID starting with 935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6 not found: ID does not exist" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.801620 4705 scope.go:117] "RemoveContainer" containerID="7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788" Feb 16 16:29:24 crc kubenswrapper[4705]: E0216 
16:29:24.801901 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788\": container with ID starting with 7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788 not found: ID does not exist" containerID="7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.801932 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788"} err="failed to get container status \"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788\": rpc error: code = NotFound desc = could not find container \"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788\": container with ID starting with 7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788 not found: ID does not exist" Feb 16 16:29:25 crc kubenswrapper[4705]: E0216 16:29:25.421138 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:29:26 crc kubenswrapper[4705]: I0216 16:29:26.437179 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f0330f-6e72-46a8-a663-593543de6aee" path="/var/lib/kubelet/pods/31f0330f-6e72-46a8-a663-593543de6aee/volumes" Feb 16 16:29:33 crc kubenswrapper[4705]: E0216 16:29:33.421600 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:29:40 crc kubenswrapper[4705]: E0216 16:29:40.423001 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:29:46 crc kubenswrapper[4705]: E0216 16:29:46.438585 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:29:51 crc kubenswrapper[4705]: E0216 16:29:51.422821 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.170530 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv"] Feb 16 16:30:00 crc kubenswrapper[4705]: E0216 16:30:00.172267 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="extract-content" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.172291 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="extract-content" Feb 16 16:30:00 crc kubenswrapper[4705]: E0216 16:30:00.172315 4705 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="extract-utilities" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.172323 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="extract-utilities" Feb 16 16:30:00 crc kubenswrapper[4705]: E0216 16:30:00.172410 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="registry-server" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.172423 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="registry-server" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.172787 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="registry-server" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.174211 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.176627 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.178357 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.200496 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv"] Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.359020 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") pod 
\"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.359604 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.360687 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.463216 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.463291 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.463462 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.464714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.480238 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.481095 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.513904 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.984229 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv"] Feb 16 16:30:01 crc kubenswrapper[4705]: I0216 16:30:01.125129 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" event={"ID":"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546","Type":"ContainerStarted","Data":"55c9ea17e164f2c01540af923b7a4af8ffb0a2aeb49c39a010c04dc5049766da"} Feb 16 16:30:01 crc kubenswrapper[4705]: E0216 16:30:01.421467 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:02 crc kubenswrapper[4705]: I0216 16:30:02.140837 4705 generic.go:334] "Generic (PLEG): container finished" podID="3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" containerID="ebe76d7fb2dfcd6bac19a4d7c3d30e97b8f28e75a83763fdd5cf18cc5cda7b9b" exitCode=0 Feb 16 16:30:02 crc kubenswrapper[4705]: I0216 16:30:02.140930 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" event={"ID":"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546","Type":"ContainerDied","Data":"ebe76d7fb2dfcd6bac19a4d7c3d30e97b8f28e75a83763fdd5cf18cc5cda7b9b"} Feb 16 16:30:02 crc kubenswrapper[4705]: E0216 16:30:02.421420 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.570342 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.663932 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") pod \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.664707 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") pod \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.664922 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") pod \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.664983 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume" (OuterVolumeSpecName: "config-volume") pod "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" (UID: "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.665779 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.676193 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" (UID: "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.678202 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq" (OuterVolumeSpecName: "kube-api-access-qk8zq") pod "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" (UID: "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546"). InnerVolumeSpecName "kube-api-access-qk8zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.768715 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") on node \"crc\" DevicePath \"\"" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.768931 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.164435 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" event={"ID":"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546","Type":"ContainerDied","Data":"55c9ea17e164f2c01540af923b7a4af8ffb0a2aeb49c39a010c04dc5049766da"} Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.164480 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c9ea17e164f2c01540af923b7a4af8ffb0a2aeb49c39a010c04dc5049766da" Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.164517 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.666106 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.676821 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 16:30:06 crc kubenswrapper[4705]: I0216 16:30:06.435754 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" path="/var/lib/kubelet/pods/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd/volumes" Feb 16 16:30:13 crc kubenswrapper[4705]: E0216 16:30:13.421917 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:16 crc kubenswrapper[4705]: E0216 16:30:16.429541 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:25 crc kubenswrapper[4705]: E0216 16:30:25.421720 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:30 crc kubenswrapper[4705]: E0216 
16:30:30.422231 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:31 crc kubenswrapper[4705]: I0216 16:30:31.684630 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:30:31 crc kubenswrapper[4705]: I0216 16:30:31.684910 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:30:37 crc kubenswrapper[4705]: E0216 16:30:37.421851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:38 crc kubenswrapper[4705]: I0216 16:30:38.188650 4705 scope.go:117] "RemoveContainer" containerID="c5799f899046339461728bd5e74a089bc2fd5675a54e2ff521c9c4de9307b408" Feb 16 16:30:44 crc kubenswrapper[4705]: E0216 16:30:44.423215 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:51 crc kubenswrapper[4705]: E0216 16:30:51.423747 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:58 crc kubenswrapper[4705]: E0216 16:30:58.423179 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:01 crc kubenswrapper[4705]: I0216 16:31:01.684459 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:31:01 crc kubenswrapper[4705]: I0216 16:31:01.685250 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:31:02 crc kubenswrapper[4705]: E0216 16:31:02.425159 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:11 crc kubenswrapper[4705]: E0216 16:31:11.423345 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:14 crc kubenswrapper[4705]: E0216 16:31:14.421466 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:26 crc kubenswrapper[4705]: E0216 16:31:26.431892 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:29 crc kubenswrapper[4705]: E0216 16:31:29.424445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.683955 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.685232 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.685355 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.686461 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.686613 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" gracePeriod=600 Feb 16 16:31:31 crc kubenswrapper[4705]: E0216 16:31:31.813337 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:31:32 crc kubenswrapper[4705]: I0216 16:31:32.266089 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" exitCode=0 Feb 16 16:31:32 crc kubenswrapper[4705]: I0216 16:31:32.266140 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"} Feb 16 16:31:32 crc kubenswrapper[4705]: I0216 16:31:32.266180 4705 scope.go:117] "RemoveContainer" containerID="33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140" Feb 16 16:31:32 crc kubenswrapper[4705]: I0216 16:31:32.267281 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:31:32 crc kubenswrapper[4705]: E0216 16:31:32.267758 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:31:37 crc kubenswrapper[4705]: I0216 16:31:37.423018 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:31:37 crc kubenswrapper[4705]: E0216 16:31:37.507448 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: 
reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:31:37 crc kubenswrapper[4705]: E0216 16:31:37.507834 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:31:37 crc kubenswrapper[4705]: E0216 16:31:37.508009 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:31:37 crc kubenswrapper[4705]: E0216 16:31:37.509251 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:44 crc kubenswrapper[4705]: E0216 16:31:44.556927 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:31:44 crc kubenswrapper[4705]: E0216 16:31:44.557627 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:31:44 crc kubenswrapper[4705]: E0216 16:31:44.557783 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:31:44 crc kubenswrapper[4705]: E0216 16:31:44.559004 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:46 crc kubenswrapper[4705]: I0216 16:31:46.426592 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:31:46 crc kubenswrapper[4705]: E0216 16:31:46.427229 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:31:48 crc kubenswrapper[4705]: E0216 16:31:48.424350 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:59 crc kubenswrapper[4705]: E0216 16:31:59.426047 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:59 crc kubenswrapper[4705]: E0216 16:31:59.426177 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:00 crc kubenswrapper[4705]: I0216 16:32:00.420967 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:00 crc kubenswrapper[4705]: E0216 16:32:00.421655 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:32:10 crc kubenswrapper[4705]: E0216 16:32:10.422156 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:13 crc kubenswrapper[4705]: I0216 16:32:13.420886 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:13 crc kubenswrapper[4705]: E0216 16:32:13.422047 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:32:13 crc kubenswrapper[4705]: E0216 16:32:13.422382 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:32:22 crc kubenswrapper[4705]: E0216 16:32:22.422708 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:26 crc kubenswrapper[4705]: E0216 16:32:26.433729 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:32:28 crc kubenswrapper[4705]: I0216 16:32:28.420412 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:28 crc kubenswrapper[4705]: E0216 16:32:28.421215 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:32:36 crc kubenswrapper[4705]: E0216 16:32:36.432446 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:40 crc kubenswrapper[4705]: E0216 16:32:40.423438 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:32:42 crc kubenswrapper[4705]: I0216 16:32:42.419442 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:42 crc kubenswrapper[4705]: E0216 16:32:42.420235 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:32:49 crc kubenswrapper[4705]: E0216 16:32:49.426948 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:52 crc kubenswrapper[4705]: E0216 16:32:52.423855 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:32:54 crc kubenswrapper[4705]: I0216 16:32:54.419551 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:54 crc kubenswrapper[4705]: E0216 16:32:54.420285 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:02 crc kubenswrapper[4705]: E0216 16:33:02.423469 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:33:05 crc kubenswrapper[4705]: I0216 16:33:05.420278 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:33:05 crc kubenswrapper[4705]: E0216 16:33:05.421197 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:05 crc kubenswrapper[4705]: E0216 16:33:05.422248 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:33:14 crc kubenswrapper[4705]: E0216 16:33:14.424288 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:33:16 crc kubenswrapper[4705]: I0216 16:33:16.437614 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:33:16 crc kubenswrapper[4705]: E0216 16:33:16.440170 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:16 crc kubenswrapper[4705]: E0216 16:33:16.440380 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:33:27 crc kubenswrapper[4705]: E0216 16:33:27.424237 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:33:28 crc kubenswrapper[4705]: E0216 16:33:28.422078 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:33:29 crc kubenswrapper[4705]: I0216 16:33:29.420499 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:33:29 crc kubenswrapper[4705]: E0216 16:33:29.421644 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:41 crc kubenswrapper[4705]: E0216 16:33:41.422484 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:33:41 crc kubenswrapper[4705]: E0216 16:33:41.422534 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:33:43 crc kubenswrapper[4705]: I0216 16:33:43.421543 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"
Feb 16 16:33:43 crc kubenswrapper[4705]: E0216 16:33:43.422151 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:33:52 crc kubenswrapper[4705]: E0216 16:33:52.422945 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:33:54 crc kubenswrapper[4705]: I0216 16:33:54.419811 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"
Feb 16 16:33:54 crc kubenswrapper[4705]: E0216 16:33:54.421467 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:33:56 crc kubenswrapper[4705]: E0216 16:33:56.429181 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:34:06 crc kubenswrapper[4705]: E0216 16:34:06.432685 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:34:07 crc kubenswrapper[4705]: I0216 16:34:07.420677 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"
Feb 16 16:34:07 crc kubenswrapper[4705]: E0216 16:34:07.421006 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:34:07 crc kubenswrapper[4705]: E0216 16:34:07.421458 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.145616 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rt2v7"]
Feb 16 16:34:16 crc kubenswrapper[4705]: E0216 16:34:16.147743 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" containerName="collect-profiles"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.147763 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" containerName="collect-profiles"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.148092 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" containerName="collect-profiles"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.151708 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.164296 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rt2v7"]
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.288792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-utilities\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.289024 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjvk\" (UniqueName: \"kubernetes.io/projected/18edbe2f-e5ad-43df-863e-524fabeed67c-kube-api-access-cbjvk\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.289294 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-catalog-content\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.391449 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjvk\" (UniqueName: \"kubernetes.io/projected/18edbe2f-e5ad-43df-863e-524fabeed67c-kube-api-access-cbjvk\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.391584 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-catalog-content\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.391675 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-utilities\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.392203 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-utilities\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.392361 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-catalog-content\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.417325 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjvk\" (UniqueName: \"kubernetes.io/projected/18edbe2f-e5ad-43df-863e-524fabeed67c-kube-api-access-cbjvk\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.499360 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:17 crc kubenswrapper[4705]: I0216 16:34:17.083431 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rt2v7"]
Feb 16 16:34:17 crc kubenswrapper[4705]: I0216 16:34:17.648579 4705 generic.go:334] "Generic (PLEG): container finished" podID="18edbe2f-e5ad-43df-863e-524fabeed67c" containerID="1fabcd33a4de6ee5edbb119488563d893aee5b3a68182c6cb13f2e91e34c6dbf" exitCode=0
Feb 16 16:34:17 crc kubenswrapper[4705]: I0216 16:34:17.648629 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt2v7" event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerDied","Data":"1fabcd33a4de6ee5edbb119488563d893aee5b3a68182c6cb13f2e91e34c6dbf"}
Feb 16 16:34:17 crc kubenswrapper[4705]: I0216 16:34:17.648910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt2v7" event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerStarted","Data":"fa5f9737c04dea3d01df2d6a5370925204647cfdbdac5def9bb7f583ed6a048e"}
Feb 16 16:34:18 crc kubenswrapper[4705]: I0216 16:34:18.664649 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt2v7" event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerStarted","Data":"38d65b0d9f35998a8b3461cb0d9299908057fd7d5d0900b602b2dce99fbbb9c2"}
Feb 16 16:34:19 crc kubenswrapper[4705]: I0216 16:34:19.419978 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"
Feb 16 16:34:19 crc kubenswrapper[4705]: E0216 16:34:19.420710 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:34:20 crc kubenswrapper[4705]: E0216 16:34:20.422253 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:34:21 crc kubenswrapper[4705]: E0216 16:34:21.421219 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:34:23 crc kubenswrapper[4705]: I0216 16:34:23.720734 4705 generic.go:334] "Generic (PLEG): container finished" podID="18edbe2f-e5ad-43df-863e-524fabeed67c" containerID="38d65b0d9f35998a8b3461cb0d9299908057fd7d5d0900b602b2dce99fbbb9c2" exitCode=0
Feb 16 16:34:23 crc kubenswrapper[4705]: I0216 16:34:23.721449 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt2v7" event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerDied","Data":"38d65b0d9f35998a8b3461cb0d9299908057fd7d5d0900b602b2dce99fbbb9c2"}
Feb 16 16:34:24 crc kubenswrapper[4705]: I0216 16:34:24.733150 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt2v7" event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerStarted","Data":"26e234b2759d2cb6166de47bddcc1e64fc3272dd06910c7334d08af2bfd11d13"}
Feb 16 16:34:24 crc kubenswrapper[4705]: I0216 16:34:24.763681 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rt2v7" podStartSLOduration=2.273158653 podStartE2EDuration="8.763353603s" podCreationTimestamp="2026-02-16 16:34:16 +0000 UTC" firstStartedPulling="2026-02-16 16:34:17.650644527 +0000 UTC m=+6051.835621603" lastFinishedPulling="2026-02-16 16:34:24.140839457 +0000 UTC m=+6058.325816553" observedRunningTime="2026-02-16 16:34:24.758496306 +0000 UTC m=+6058.943473382" watchObservedRunningTime="2026-02-16 16:34:24.763353603 +0000 UTC m=+6058.948330679"
Feb 16 16:34:26 crc kubenswrapper[4705]: I0216 16:34:26.500365 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:26 crc kubenswrapper[4705]: I0216 16:34:26.500872 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rt2v7"
Feb 16 16:34:27 crc kubenswrapper[4705]: I0216 16:34:27.553257 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rt2v7" podUID="18edbe2f-e5ad-43df-863e-524fabeed67c" containerName="registry-server" probeResult="failure" output=<
Feb 16 16:34:27 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 16:34:27 crc kubenswrapper[4705]: >